Jan 30 12:54:25.958735 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 12:54:25.958758 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025 Jan 30 12:54:25.958769 kernel: KASLR enabled Jan 30 12:54:25.958775 kernel: efi: EFI v2.7 by EDK II Jan 30 12:54:25.958780 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Jan 30 12:54:25.958786 kernel: random: crng init done Jan 30 12:54:25.958792 kernel: secureboot: Secure boot disabled Jan 30 12:54:25.958798 kernel: ACPI: Early table checksum verification disabled Jan 30 12:54:25.958804 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jan 30 12:54:25.958811 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 12:54:25.958817 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958823 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958829 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958835 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958842 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958850 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958856 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958863 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958869 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:54:25.958875 kernel: ACPI: SPCR: console: 
pl011,mmio,0x9000000,9600 Jan 30 12:54:25.958881 kernel: NUMA: Failed to initialise from firmware Jan 30 12:54:25.958887 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:54:25.958893 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff] Jan 30 12:54:25.958899 kernel: Zone ranges: Jan 30 12:54:25.958905 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:54:25.958914 kernel: DMA32 empty Jan 30 12:54:25.958923 kernel: Normal empty Jan 30 12:54:25.958930 kernel: Movable zone start for each node Jan 30 12:54:25.958936 kernel: Early memory node ranges Jan 30 12:54:25.958942 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Jan 30 12:54:25.958948 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Jan 30 12:54:25.958954 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Jan 30 12:54:25.958960 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 12:54:25.958966 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 12:54:25.958972 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 12:54:25.958978 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 12:54:25.958984 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 12:54:25.958991 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 12:54:25.958997 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:54:25.959004 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 12:54:25.959012 kernel: psci: probing for conduit method from ACPI. Jan 30 12:54:25.959019 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 12:54:25.959025 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 12:54:25.959033 kernel: psci: Trusted OS migration not required Jan 30 12:54:25.959040 kernel: psci: SMC Calling Convention v1.1 Jan 30 12:54:25.959046 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 12:54:25.959052 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 12:54:25.959059 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 12:54:25.959066 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 12:54:25.959073 kernel: Detected PIPT I-cache on CPU0 Jan 30 12:54:25.959079 kernel: CPU features: detected: GIC system register CPU interface Jan 30 12:54:25.959085 kernel: CPU features: detected: Hardware dirty bit management Jan 30 12:54:25.959092 kernel: CPU features: detected: Spectre-v4 Jan 30 12:54:25.959099 kernel: CPU features: detected: Spectre-BHB Jan 30 12:54:25.959105 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 12:54:25.959112 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 12:54:25.959118 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 12:54:25.959125 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 12:54:25.959131 kernel: alternatives: applying boot alternatives Jan 30 12:54:25.959138 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 30 12:54:25.959145 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 30 12:54:25.959151 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 12:54:25.959158 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 12:54:25.959164 kernel: Fallback order for Node 0: 0 Jan 30 12:54:25.959172 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 12:54:25.959178 kernel: Policy zone: DMA Jan 30 12:54:25.959185 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 12:54:25.959191 kernel: software IO TLB: area num 4. Jan 30 12:54:25.959198 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 12:54:25.959204 kernel: Memory: 2385932K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186356K reserved, 0K cma-reserved) Jan 30 12:54:25.959211 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 12:54:25.959217 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 12:54:25.959224 kernel: rcu: RCU event tracing is enabled. Jan 30 12:54:25.959231 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 12:54:25.959237 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 12:54:25.959244 kernel: Tracing variant of Tasks RCU enabled. Jan 30 12:54:25.959252 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 12:54:25.959258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 12:54:25.959265 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 12:54:25.959271 kernel: GICv3: 256 SPIs implemented Jan 30 12:54:25.959277 kernel: GICv3: 0 Extended SPIs implemented Jan 30 12:54:25.959283 kernel: Root IRQ handler: gic_handle_irq Jan 30 12:54:25.959290 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 12:54:25.959296 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 12:54:25.959303 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 12:54:25.959309 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 12:54:25.959315 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 12:54:25.959324 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 12:54:25.959330 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 12:54:25.959337 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 12:54:25.959343 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:54:25.959350 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 12:54:25.959356 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 12:54:25.959363 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 12:54:25.959369 kernel: arm-pv: using stolen time PV Jan 30 12:54:25.959375 kernel: Console: colour dummy device 80x25 Jan 30 12:54:25.959382 kernel: ACPI: Core revision 20230628 Jan 30 12:54:25.959389 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Jan 30 12:54:25.959405 kernel: pid_max: default: 32768 minimum: 301 Jan 30 12:54:25.959412 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 12:54:25.959419 kernel: landlock: Up and running. Jan 30 12:54:25.959425 kernel: SELinux: Initializing. Jan 30 12:54:25.959432 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:54:25.959438 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:54:25.959445 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:54:25.959452 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:54:25.959458 kernel: rcu: Hierarchical SRCU implementation. Jan 30 12:54:25.959467 kernel: rcu: Max phase no-delay instances is 400. Jan 30 12:54:25.959474 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 12:54:25.959480 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 12:54:25.959487 kernel: Remapping and enabling EFI services. Jan 30 12:54:25.959494 kernel: smp: Bringing up secondary CPUs ... 
Jan 30 12:54:25.959500 kernel: Detected PIPT I-cache on CPU1 Jan 30 12:54:25.959507 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 12:54:25.959514 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 12:54:25.959521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:54:25.959529 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 12:54:25.959536 kernel: Detected PIPT I-cache on CPU2 Jan 30 12:54:25.959548 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 12:54:25.959557 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 12:54:25.959564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:54:25.959571 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 12:54:25.959578 kernel: Detected PIPT I-cache on CPU3 Jan 30 12:54:25.959585 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 12:54:25.959592 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 12:54:25.959601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:54:25.959607 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 12:54:25.959615 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 12:54:25.959621 kernel: SMP: Total of 4 processors activated. 
Jan 30 12:54:25.959628 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 12:54:25.959636 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 12:54:25.959643 kernel: CPU features: detected: Common not Private translations Jan 30 12:54:25.959649 kernel: CPU features: detected: CRC32 instructions Jan 30 12:54:25.959658 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 12:54:25.959665 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 12:54:25.959672 kernel: CPU features: detected: LSE atomic instructions Jan 30 12:54:25.959679 kernel: CPU features: detected: Privileged Access Never Jan 30 12:54:25.959686 kernel: CPU features: detected: RAS Extension Support Jan 30 12:54:25.959693 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 12:54:25.959700 kernel: CPU: All CPU(s) started at EL1 Jan 30 12:54:25.959707 kernel: alternatives: applying system-wide alternatives Jan 30 12:54:25.959714 kernel: devtmpfs: initialized Jan 30 12:54:25.959736 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 12:54:25.959746 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 12:54:25.959753 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 12:54:25.959760 kernel: SMBIOS 3.0.0 present. 
Jan 30 12:54:25.959767 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jan 30 12:54:25.959774 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 12:54:25.959781 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 12:54:25.959788 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 12:54:25.959795 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 12:54:25.959804 kernel: audit: initializing netlink subsys (disabled) Jan 30 12:54:25.959811 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jan 30 12:54:25.959818 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 12:54:25.959825 kernel: cpuidle: using governor menu Jan 30 12:54:25.959832 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 12:54:25.959839 kernel: ASID allocator initialised with 32768 entries Jan 30 12:54:25.959846 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 12:54:25.959853 kernel: Serial: AMBA PL011 UART driver Jan 30 12:54:25.959860 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 12:54:25.959869 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 12:54:25.959876 kernel: Modules: 508880 pages in range for PLT usage Jan 30 12:54:25.959883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 12:54:25.959890 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 12:54:25.959898 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 12:54:25.959905 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 12:54:25.959912 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 12:54:25.959920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 12:54:25.959927 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 
pages Jan 30 12:54:25.959936 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 12:54:25.959943 kernel: ACPI: Added _OSI(Module Device) Jan 30 12:54:25.959950 kernel: ACPI: Added _OSI(Processor Device) Jan 30 12:54:25.959957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 12:54:25.959964 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 12:54:25.959971 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 12:54:25.959978 kernel: ACPI: Interpreter enabled Jan 30 12:54:25.959985 kernel: ACPI: Using GIC for interrupt routing Jan 30 12:54:25.959992 kernel: ACPI: MCFG table detected, 1 entries Jan 30 12:54:25.959999 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 12:54:25.960007 kernel: printk: console [ttyAMA0] enabled Jan 30 12:54:25.960014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 12:54:25.960151 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 12:54:25.960222 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 12:54:25.960287 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 12:54:25.960349 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 12:54:25.960421 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 12:54:25.960435 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 12:54:25.960442 kernel: PCI host bridge to bus 0000:00 Jan 30 12:54:25.960513 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 12:54:25.960571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 12:54:25.960627 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 12:54:25.960684 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 12:54:25.960846 
kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 12:54:25.960928 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 12:54:25.960994 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 30 12:54:25.961061 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 30 12:54:25.961125 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:54:25.961205 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:54:25.961270 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 30 12:54:25.961338 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 30 12:54:25.961404 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 12:54:25.961467 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 12:54:25.961524 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 12:54:25.961533 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 12:54:25.961540 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 12:54:25.961547 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 12:54:25.961554 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 12:54:25.961565 kernel: iommu: Default domain type: Translated Jan 30 12:54:25.961572 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 12:54:25.961579 kernel: efivars: Registered efivars operations Jan 30 12:54:25.961585 kernel: vgaarb: loaded Jan 30 12:54:25.961592 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 12:54:25.961599 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 12:54:25.961606 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 12:54:25.961613 kernel: pnp: PnP ACPI init Jan 30 12:54:25.961690 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 12:54:25.961702 
kernel: pnp: PnP ACPI: found 1 devices Jan 30 12:54:25.961710 kernel: NET: Registered PF_INET protocol family Jan 30 12:54:25.961716 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 12:54:25.961734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 12:54:25.961746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 12:54:25.961753 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 12:54:25.961760 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 12:54:25.961767 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 12:54:25.961778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:54:25.961785 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:54:25.961791 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 12:54:25.961799 kernel: PCI: CLS 0 bytes, default 64 Jan 30 12:54:25.961805 kernel: kvm [1]: HYP mode not available Jan 30 12:54:25.961813 kernel: Initialise system trusted keyrings Jan 30 12:54:25.961820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 12:54:25.961827 kernel: Key type asymmetric registered Jan 30 12:54:25.961834 kernel: Asymmetric key parser 'x509' registered Jan 30 12:54:25.961841 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 12:54:25.961849 kernel: io scheduler mq-deadline registered Jan 30 12:54:25.961856 kernel: io scheduler kyber registered Jan 30 12:54:25.961863 kernel: io scheduler bfq registered Jan 30 12:54:25.961871 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 12:54:25.961878 kernel: ACPI: button: Power Button [PWRB] Jan 30 12:54:25.961886 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 12:54:25.961958 kernel: virtio-pci 0000:00:01.0: 
enabling device (0005 -> 0007) Jan 30 12:54:25.961968 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 12:54:25.961975 kernel: thunder_xcv, ver 1.0 Jan 30 12:54:25.961984 kernel: thunder_bgx, ver 1.0 Jan 30 12:54:25.961991 kernel: nicpf, ver 1.0 Jan 30 12:54:25.961998 kernel: nicvf, ver 1.0 Jan 30 12:54:25.962070 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 12:54:25.962133 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:54:25 UTC (1738241665) Jan 30 12:54:25.962143 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 12:54:25.962150 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 12:54:25.962158 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 12:54:25.962167 kernel: watchdog: Hard watchdog permanently disabled Jan 30 12:54:25.962175 kernel: NET: Registered PF_INET6 protocol family Jan 30 12:54:25.962182 kernel: Segment Routing with IPv6 Jan 30 12:54:25.962189 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 12:54:25.962196 kernel: NET: Registered PF_PACKET protocol family Jan 30 12:54:25.962204 kernel: Key type dns_resolver registered Jan 30 12:54:25.962210 kernel: registered taskstats version 1 Jan 30 12:54:25.962218 kernel: Loading compiled-in X.509 certificates Jan 30 12:54:25.962225 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a' Jan 30 12:54:25.962234 kernel: Key type .fscrypt registered Jan 30 12:54:25.962241 kernel: Key type fscrypt-provisioning registered Jan 30 12:54:25.962248 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 12:54:25.962255 kernel: ima: Allocated hash algorithm: sha1 Jan 30 12:54:25.962262 kernel: ima: No architecture policies found Jan 30 12:54:25.962270 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 12:54:25.962277 kernel: clk: Disabling unused clocks Jan 30 12:54:25.962284 kernel: Freeing unused kernel memory: 39936K Jan 30 12:54:25.962293 kernel: Run /init as init process Jan 30 12:54:25.962300 kernel: with arguments: Jan 30 12:54:25.962307 kernel: /init Jan 30 12:54:25.962313 kernel: with environment: Jan 30 12:54:25.962320 kernel: HOME=/ Jan 30 12:54:25.962327 kernel: TERM=linux Jan 30 12:54:25.962334 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 12:54:25.962342 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:54:25.962352 systemd[1]: Detected virtualization kvm. Jan 30 12:54:25.962361 systemd[1]: Detected architecture arm64. Jan 30 12:54:25.962373 systemd[1]: Running in initrd. Jan 30 12:54:25.962380 systemd[1]: No hostname configured, using default hostname. Jan 30 12:54:25.962388 systemd[1]: Hostname set to . Jan 30 12:54:25.962403 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:54:25.962411 systemd[1]: Queued start job for default target initrd.target. Jan 30 12:54:25.962418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:54:25.962429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:54:25.962437 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jan 30 12:54:25.962445 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:54:25.962452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 12:54:25.962460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 12:54:25.962470 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 12:54:25.962478 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 12:54:25.962487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:54:25.962495 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:54:25.962503 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:54:25.962510 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:54:25.962518 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:54:25.962526 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:54:25.962534 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:54:25.962541 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:54:25.962549 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:54:25.962559 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:54:25.962566 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:54:25.962574 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:54:25.962582 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:54:25.962589 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 12:54:25.962597 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:54:25.962605 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:54:25.962612 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:54:25.962622 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:54:25.962629 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:54:25.962637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:54:25.962644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:54:25.962652 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:54:25.962659 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:54:25.962667 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:54:25.962677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:54:25.962684 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:54:25.962692 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:54:25.962730 systemd-journald[238]: Collecting audit messages is disabled. Jan 30 12:54:25.962753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:54:25.962760 kernel: Bridge firewalling registered Jan 30 12:54:25.962768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:54:25.962776 systemd-journald[238]: Journal started Jan 30 12:54:25.962801 systemd-journald[238]: Runtime Journal (/run/log/journal/68e066d4a9b94a80a1525e22a7abd092) is 5.9M, max 47.3M, 41.4M free. 
Jan 30 12:54:25.939401 systemd-modules-load[239]: Inserted module 'overlay' Jan 30 12:54:25.958875 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 30 12:54:25.968261 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:54:25.968286 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:54:25.969677 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:54:25.974055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:54:25.977157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:54:25.979265 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:54:25.988758 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:54:25.992070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:54:25.993698 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:54:26.003929 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:54:26.006530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:54:26.015858 dracut-cmdline[275]: dracut-dracut-053 Jan 30 12:54:26.018931 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346 Jan 30 12:54:26.037370 systemd-resolved[277]: Positive Trust Anchors: Jan 30 12:54:26.037388 systemd-resolved[277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:54:26.037430 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:54:26.045137 systemd-resolved[277]: Defaulting to hostname 'linux'. Jan 30 12:54:26.046240 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:54:26.047473 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:54:26.103755 kernel: SCSI subsystem initialized Jan 30 12:54:26.113754 kernel: Loading iSCSI transport class v2.0-870. Jan 30 12:54:26.127792 kernel: iscsi: registered transport (tcp) Jan 30 12:54:26.139764 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:54:26.139826 kernel: QLogic iSCSI HBA Driver Jan 30 12:54:26.190516 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:54:26.200911 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:54:26.220316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 12:54:26.220415 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:54:26.220427 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:54:26.273786 kernel: raid6: neonx8 gen() 15786 MB/s Jan 30 12:54:26.290773 kernel: raid6: neonx4 gen() 15796 MB/s Jan 30 12:54:26.307783 kernel: raid6: neonx2 gen() 13353 MB/s Jan 30 12:54:26.324781 kernel: raid6: neonx1 gen() 10516 MB/s Jan 30 12:54:26.341851 kernel: raid6: int64x8 gen() 6782 MB/s Jan 30 12:54:26.358766 kernel: raid6: int64x4 gen() 7341 MB/s Jan 30 12:54:26.375774 kernel: raid6: int64x2 gen() 6099 MB/s Jan 30 12:54:26.392907 kernel: raid6: int64x1 gen() 5056 MB/s Jan 30 12:54:26.392974 kernel: raid6: using algorithm neonx4 gen() 15796 MB/s Jan 30 12:54:26.410912 kernel: raid6: .... xor() 12416 MB/s, rmw enabled Jan 30 12:54:26.410984 kernel: raid6: using neon recovery algorithm Jan 30 12:54:26.415749 kernel: xor: measuring software checksum speed Jan 30 12:54:26.416971 kernel: 8regs : 18834 MB/sec Jan 30 12:54:26.417003 kernel: 32regs : 20833 MB/sec Jan 30 12:54:26.418253 kernel: arm64_neon : 27570 MB/sec Jan 30 12:54:26.418275 kernel: xor: using function: arm64_neon (27570 MB/sec) Jan 30 12:54:26.475156 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:54:26.490896 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:54:26.502955 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:54:26.516616 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 30 12:54:26.521235 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:54:26.527988 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:54:26.542760 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jan 30 12:54:26.574706 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 12:54:26.587921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:54:26.635662 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:54:26.645130 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 12:54:26.659955 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:54:26.661991 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:54:26.666860 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:54:26.668251 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:54:26.676985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 12:54:26.686158 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 12:54:26.696858 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 12:54:26.696978 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 12:54:26.696991 kernel: GPT:9289727 != 19775487
Jan 30 12:54:26.697000 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 12:54:26.697009 kernel: GPT:9289727 != 19775487
Jan 30 12:54:26.697018 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 12:54:26.697026 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:54:26.689799 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:54:26.695194 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:54:26.695295 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:54:26.700972 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:54:26.702133 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:54:26.702701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:54:26.704419 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:54:26.715366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:54:26.726066 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (517)
Jan 30 12:54:26.726091 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
Jan 30 12:54:26.731660 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 12:54:26.735596 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 12:54:26.737484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 12:54:26.740426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:54:26.750074 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:54:26.757472 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 12:54:26.772936 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 12:54:26.777942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:54:26.782553 disk-uuid[552]: Primary Header is updated.
Jan 30 12:54:26.782553 disk-uuid[552]: Secondary Entries is updated.
Jan 30 12:54:26.782553 disk-uuid[552]: Secondary Header is updated.
Jan 30 12:54:26.789752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:54:26.802925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:54:27.805533 disk-uuid[553]: The operation has completed successfully.
Jan 30 12:54:27.806769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 12:54:27.847766 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 12:54:27.847877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 12:54:27.854949 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 12:54:27.861567 sh[572]: Success
Jan 30 12:54:27.889759 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 12:54:27.941484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 12:54:27.943686 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 12:54:27.946087 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 12:54:27.961078 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 12:54:27.961137 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:54:27.961148 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 12:54:27.961163 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 12:54:27.961885 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 12:54:27.966578 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 12:54:27.969356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 12:54:27.971699 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 12:54:27.973589 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:54:28.000193 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 12:54:28.000257 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:54:28.000276 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:54:28.006946 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:54:28.016109 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 12:54:28.017510 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 12:54:28.027263 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 12:54:28.034039 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 12:54:28.106966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:54:28.123928 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:54:28.156413 systemd-networkd[762]: lo: Link UP
Jan 30 12:54:28.156425 systemd-networkd[762]: lo: Gained carrier
Jan 30 12:54:28.157571 systemd-networkd[762]: Enumeration completed
Jan 30 12:54:28.157686 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:54:28.159822 systemd[1]: Reached target network.target - Network.
Jan 30 12:54:28.160424 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:54:28.160427 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 12:54:28.161265 systemd-networkd[762]: eth0: Link UP
Jan 30 12:54:28.161269 systemd-networkd[762]: eth0: Gained carrier
Jan 30 12:54:28.161277 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:54:28.179088 ignition[680]: Ignition 2.20.0
Jan 30 12:54:28.179113 ignition[680]: Stage: fetch-offline
Jan 30 12:54:28.179170 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:28.179181 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:28.179353 ignition[680]: parsed url from cmdline: ""
Jan 30 12:54:28.179356 ignition[680]: no config URL provided
Jan 30 12:54:28.179360 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 12:54:28.183809 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 12:54:28.179369 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Jan 30 12:54:28.179414 ignition[680]: op(1): [started] loading QEMU firmware config module
Jan 30 12:54:28.179419 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 12:54:28.191315 ignition[680]: op(1): [finished] loading QEMU firmware config module
Jan 30 12:54:28.230680 ignition[680]: parsing config with SHA512: a6654936d2128deaa0138ff8599e3792f2d761b249265416fd2d09c9a18f56a7959f65dcf2bced08edbe66b524200b2aeb03879a135b9dc174bdd659fe5790df
Jan 30 12:54:28.235942 unknown[680]: fetched base config from "system"
Jan 30 12:54:28.235952 unknown[680]: fetched user config from "qemu"
Jan 30 12:54:28.237439 systemd-resolved[277]: Detected conflict on linux IN A 10.0.0.65
Jan 30 12:54:28.237420 ignition[680]: fetch-offline: fetch-offline passed
Jan 30 12:54:28.237449 systemd-resolved[277]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Jan 30 12:54:28.237553 ignition[680]: Ignition finished successfully
Jan 30 12:54:28.239430 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:54:28.241496 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 12:54:28.252959 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 12:54:28.264861 ignition[774]: Ignition 2.20.0
Jan 30 12:54:28.264871 ignition[774]: Stage: kargs
Jan 30 12:54:28.265053 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:28.265063 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:28.268013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 12:54:28.266037 ignition[774]: kargs: kargs passed
Jan 30 12:54:28.266092 ignition[774]: Ignition finished successfully
Jan 30 12:54:28.282943 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 12:54:28.293799 ignition[783]: Ignition 2.20.0
Jan 30 12:54:28.293817 ignition[783]: Stage: disks
Jan 30 12:54:28.294007 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:28.294017 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:28.294980 ignition[783]: disks: disks passed
Jan 30 12:54:28.297793 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 12:54:28.295037 ignition[783]: Ignition finished successfully
Jan 30 12:54:28.299884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 12:54:28.301316 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 12:54:28.303435 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:54:28.305114 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:54:28.307155 systemd[1]: Reached target basic.target - Basic System.
Jan 30 12:54:28.313904 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 12:54:28.332070 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 12:54:28.337669 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 12:54:28.347886 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 12:54:28.409888 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 12:54:28.411489 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 12:54:28.411270 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:54:28.431889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:54:28.433926 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 12:54:28.434964 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 12:54:28.435014 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 12:54:28.435040 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:54:28.444759 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
Jan 30 12:54:28.446911 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 12:54:28.449097 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 12:54:28.449125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:54:28.449135 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:54:28.451338 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 12:54:28.454344 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:54:28.456316 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:54:28.507153 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 12:54:28.512180 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Jan 30 12:54:28.516471 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 12:54:28.519761 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 12:54:28.609564 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 12:54:28.621942 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 12:54:28.624533 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 12:54:28.629736 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 12:54:28.651258 ignition[916]: INFO : Ignition 2.20.0
Jan 30 12:54:28.651258 ignition[916]: INFO : Stage: mount
Jan 30 12:54:28.653796 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:28.653796 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:28.653796 ignition[916]: INFO : mount: mount passed
Jan 30 12:54:28.653796 ignition[916]: INFO : Ignition finished successfully
Jan 30 12:54:28.654524 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 12:54:28.656054 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 12:54:28.665839 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 12:54:28.958640 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 12:54:28.971937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 12:54:28.981745 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Jan 30 12:54:28.984940 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 12:54:28.984980 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 12:54:28.984991 kernel: BTRFS info (device vda6): using free space tree
Jan 30 12:54:28.987747 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 12:54:28.989275 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 12:54:29.013587 ignition[946]: INFO : Ignition 2.20.0
Jan 30 12:54:29.013587 ignition[946]: INFO : Stage: files
Jan 30 12:54:29.015352 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:29.015352 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:29.015352 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 12:54:29.019030 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 12:54:29.019030 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 12:54:29.019030 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 12:54:29.019030 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 12:54:29.019030 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 12:54:29.018647 unknown[946]: wrote ssh authorized keys file for user: core
Jan 30 12:54:29.026589 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:54:29.026589 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 12:54:29.061784 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 12:54:29.141261 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 12:54:29.141261 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:54:29.145525 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 12:54:29.395291 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 12:54:29.594590 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 12:54:29.594590 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 12:54:29.598543 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:54:29.629866 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:54:29.638027 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 12:54:29.639708 ignition[946]: INFO : files: files passed
Jan 30 12:54:29.639708 ignition[946]: INFO : Ignition finished successfully
Jan 30 12:54:29.641312 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 12:54:29.654084 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 12:54:29.656264 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 12:54:29.661842 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 12:54:29.661980 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 12:54:29.668687 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 12:54:29.672893 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:54:29.672893 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:54:29.676418 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 12:54:29.679564 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:54:29.681260 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 12:54:29.691934 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 12:54:29.720746 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 12:54:29.720869 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 12:54:29.723357 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 12:54:29.725359 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 12:54:29.727409 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 12:54:29.738980 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 12:54:29.753547 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:54:29.763939 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 12:54:29.774018 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:54:29.775495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:54:29.778411 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 12:54:29.780525 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 12:54:29.780663 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:54:29.783993 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 12:54:29.785244 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 12:54:29.787381 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 12:54:29.789373 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 12:54:29.791247 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 12:54:29.793351 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 12:54:29.795544 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 12:54:29.798050 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 12:54:29.799890 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 12:54:29.801856 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 12:54:29.803589 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 12:54:29.803738 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 12:54:29.806173 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:54:29.808108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:54:29.810082 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 12:54:29.810258 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:54:29.812122 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 12:54:29.812254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 12:54:29.815022 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 12:54:29.815162 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 12:54:29.817576 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 12:54:29.819575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 12:54:29.820480 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:54:29.821849 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 12:54:29.823608 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 12:54:29.825572 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 12:54:29.825663 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:54:29.827328 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 12:54:29.827425 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:54:29.829245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 12:54:29.829377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 12:54:29.831855 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 12:54:29.831961 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 12:54:29.840931 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 12:54:29.842863 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 12:54:29.843031 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:54:29.846435 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 12:54:29.848156 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 12:54:29.848338 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:54:29.851255 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 12:54:29.851487 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:54:29.861938 ignition[1001]: INFO : Ignition 2.20.0
Jan 30 12:54:29.861938 ignition[1001]: INFO : Stage: umount
Jan 30 12:54:29.861938 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 12:54:29.861938 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 12:54:29.871888 ignition[1001]: INFO : umount: umount passed
Jan 30 12:54:29.871888 ignition[1001]: INFO : Ignition finished successfully
Jan 30 12:54:29.864567 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 12:54:29.867357 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 12:54:29.867514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 12:54:29.869853 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 12:54:29.869963 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 12:54:29.876067 systemd[1]: Stopped target network.target - Network.
Jan 30 12:54:29.883122 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 12:54:29.883215 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 12:54:29.884840 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 12:54:29.884899 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 12:54:29.896606 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 12:54:29.896674 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 12:54:29.898441 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 12:54:29.898499 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 12:54:29.900553 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 12:54:29.902215 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 12:54:29.909804 systemd-networkd[762]: eth0: DHCPv6 lease lost
Jan 30 12:54:29.913317 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 12:54:29.914680 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 12:54:29.918708 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 12:54:29.918888 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 12:54:29.922051 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 12:54:29.922120 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:54:29.933919 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 12:54:29.935065 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 12:54:29.935166 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 12:54:29.937895 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 12:54:29.937963 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:54:29.940858 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 12:54:29.940929 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:54:29.943237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 12:54:29.943307 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:54:29.947304 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:54:29.951131 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 12:54:29.951816 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 12:54:29.961700 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 12:54:29.961812 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 12:54:29.964282 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 12:54:29.964436 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 12:54:29.971836 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 12:54:29.972020 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:54:29.976537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 12:54:29.976607 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:54:29.979008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 12:54:29.979054 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:54:29.981118 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 12:54:29.981184 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:54:29.984078 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 12:54:29.984139 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:54:29.987113 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 12:54:29.987185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:54:29.998932 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 12:54:30.001208 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 12:54:30.001304 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:54:30.004925 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 12:54:30.005001 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:54:30.007584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 12:54:30.007682 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 12:54:30.010265 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 12:54:30.014032 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 12:54:30.028125 systemd[1]: Switching root.
Jan 30 12:54:30.061400 systemd-journald[238]: Journal stopped
Jan 30 12:54:30.928716 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jan 30 12:54:30.929084 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 12:54:30.929106 kernel: SELinux: policy capability open_perms=1
Jan 30 12:54:30.929116 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 12:54:30.929126 kernel: SELinux: policy capability always_check_network=0
Jan 30 12:54:30.929136 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 12:54:30.929146 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 12:54:30.929155 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 12:54:30.929168 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 12:54:30.929177 kernel: audit: type=1403 audit(1738241670.218:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 12:54:30.929189 systemd[1]: Successfully loaded SELinux policy in 37.633ms.
Jan 30 12:54:30.929206 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.165ms.
Jan 30 12:54:30.929218 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:54:30.929229 systemd[1]: Detected virtualization kvm.
Jan 30 12:54:30.929239 systemd[1]: Detected architecture arm64.
Jan 30 12:54:30.929250 systemd[1]: Detected first boot.
Jan 30 12:54:30.929261 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:54:30.929272 zram_generator::config[1045]: No configuration found.
Jan 30 12:54:30.929283 systemd[1]: Populated /etc with preset unit settings.
Jan 30 12:54:30.929298 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 12:54:30.929312 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 12:54:30.929323 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 12:54:30.929334 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 12:54:30.929345 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 12:54:30.929371 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 12:54:30.929387 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 12:54:30.929397 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 12:54:30.929408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 12:54:30.929419 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 12:54:30.929430 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 12:54:30.929440 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:54:30.929451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:54:30.929462 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 12:54:30.929474 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 12:54:30.929486 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 12:54:30.929503 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:54:30.929517 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 12:54:30.929527 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:54:30.929538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 12:54:30.929549 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 12:54:30.929560 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 12:54:30.929574 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 12:54:30.929593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 12:54:30.929604 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 12:54:30.929615 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:54:30.929625 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:54:30.929636 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 12:54:30.929647 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 12:54:30.929658 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:54:30.929669 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:54:30.929681 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:54:30.929692 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 12:54:30.929702 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 12:54:30.929713 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 12:54:30.929738 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 12:54:30.929750 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 12:54:30.929761 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 12:54:30.929772 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 12:54:30.929783 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 12:54:30.929796 systemd[1]: Reached target machines.target - Containers.
Jan 30 12:54:30.929806 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 12:54:30.929817 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:54:30.929828 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:54:30.929839 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 12:54:30.929850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:54:30.929860 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:54:30.929871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:54:30.929884 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 12:54:30.929894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:54:30.929905 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 12:54:30.929915 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 12:54:30.929926 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 12:54:30.929937 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 12:54:30.929947 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 12:54:30.929957 kernel: fuse: init (API version 7.39)
Jan 30 12:54:30.929967 kernel: loop: module loaded
Jan 30 12:54:30.929978 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:54:30.929989 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:54:30.930000 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 12:54:30.930011 kernel: ACPI: bus type drm_connector registered
Jan 30 12:54:30.930021 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 12:54:30.930032 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 12:54:30.930042 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 12:54:30.930054 systemd[1]: Stopped verity-setup.service.
Jan 30 12:54:30.930091 systemd-journald[1105]: Collecting audit messages is disabled.
Jan 30 12:54:30.930117 systemd-journald[1105]: Journal started
Jan 30 12:54:30.930145 systemd-journald[1105]: Runtime Journal (/run/log/journal/68e066d4a9b94a80a1525e22a7abd092) is 5.9M, max 47.3M, 41.4M free.
Jan 30 12:54:30.692562 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 12:54:30.710312 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 12:54:30.710716 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 12:54:30.931781 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:54:30.933554 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 12:54:30.935023 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 12:54:30.936491 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 12:54:30.937803 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 12:54:30.939206 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 12:54:30.940621 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 12:54:30.942098 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 12:54:30.943690 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:54:30.945423 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 12:54:30.945602 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 12:54:30.947309 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:54:30.947505 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:54:30.949134 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:54:30.949286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:54:30.951052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:54:30.951213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:54:30.953028 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 12:54:30.953196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 12:54:30.954860 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:54:30.955016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:54:30.956605 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:54:30.959913 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 12:54:30.961773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 12:54:30.976242 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 12:54:30.990940 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 12:54:30.993480 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 12:54:30.994769 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 12:54:30.994820 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 12:54:30.997280 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 12:54:30.999769 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 12:54:31.002255 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 12:54:31.003515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:54:31.005582 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 12:54:31.008201 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 12:54:31.009623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:54:31.015178 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 12:54:31.016563 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:54:31.018049 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:54:31.024828 systemd-journald[1105]: Time spent on flushing to /var/log/journal/68e066d4a9b94a80a1525e22a7abd092 is 16.654ms for 856 entries.
Jan 30 12:54:31.024828 systemd-journald[1105]: System Journal (/var/log/journal/68e066d4a9b94a80a1525e22a7abd092) is 8.0M, max 195.6M, 187.6M free.
Jan 30 12:54:31.056936 systemd-journald[1105]: Received client request to flush runtime journal.
Jan 30 12:54:31.057009 kernel: loop0: detected capacity change from 0 to 113552
Jan 30 12:54:31.025977 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 12:54:31.031319 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 12:54:31.035043 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 12:54:31.036702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 12:54:31.038258 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 12:54:31.040192 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 12:54:31.042028 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 12:54:31.046430 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 12:54:31.062007 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 12:54:31.072016 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 12:54:31.076665 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 12:54:31.078824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 12:54:31.080856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:54:31.085851 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 12:54:31.092847 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 12:54:31.093589 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 12:54:31.106017 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:54:31.107817 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 12:54:31.111766 kernel: loop1: detected capacity change from 0 to 116784
Jan 30 12:54:31.127960 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 30 12:54:31.127976 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jan 30 12:54:31.132616 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:54:31.154755 kernel: loop2: detected capacity change from 0 to 194096
Jan 30 12:54:31.199768 kernel: loop3: detected capacity change from 0 to 113552
Jan 30 12:54:31.209816 kernel: loop4: detected capacity change from 0 to 116784
Jan 30 12:54:31.216789 kernel: loop5: detected capacity change from 0 to 194096
Jan 30 12:54:31.223849 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 12:54:31.224271 (sd-merge)[1181]: Merged extensions into '/usr'.
Jan 30 12:54:31.229583 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 12:54:31.229598 systemd[1]: Reloading...
Jan 30 12:54:31.285504 zram_generator::config[1204]: No configuration found.
Jan 30 12:54:31.370134 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 12:54:31.399842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:54:31.437313 systemd[1]: Reloading finished in 207 ms.
Jan 30 12:54:31.463328 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 12:54:31.464918 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 12:54:31.483056 systemd[1]: Starting ensure-sysext.service...
Jan 30 12:54:31.485319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:54:31.501458 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Jan 30 12:54:31.501474 systemd[1]: Reloading...
Jan 30 12:54:31.505404 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 12:54:31.505609 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 12:54:31.506275 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 12:54:31.506489 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 30 12:54:31.506540 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Jan 30 12:54:31.509078 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:54:31.509090 systemd-tmpfiles[1243]: Skipping /boot
Jan 30 12:54:31.517107 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 12:54:31.517121 systemd-tmpfiles[1243]: Skipping /boot
Jan 30 12:54:31.546758 zram_generator::config[1270]: No configuration found.
Jan 30 12:54:31.629574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:54:31.665762 systemd[1]: Reloading finished in 163 ms.
Jan 30 12:54:31.678401 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 12:54:31.695337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:54:31.703736 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 12:54:31.706569 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 12:54:31.709486 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 12:54:31.715356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:54:31.720948 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:54:31.725458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 12:54:31.730290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:54:31.734041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:54:31.736944 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:54:31.743752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:54:31.745044 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:54:31.762796 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 12:54:31.765763 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 12:54:31.766249 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Jan 30 12:54:31.768505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:54:31.768690 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:54:31.770639 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:54:31.770777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:54:31.776379 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:54:31.776524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:54:31.786872 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:54:31.800090 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:54:31.805597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:54:31.811012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 12:54:31.812215 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:54:31.816008 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 12:54:31.818285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:54:31.822540 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 12:54:31.825762 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 12:54:31.829291 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 12:54:31.831093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:54:31.831234 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:54:31.834114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:54:31.834240 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:54:31.838111 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 12:54:31.855887 systemd[1]: Finished ensure-sysext.service.
Jan 30 12:54:31.864801 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1345)
Jan 30 12:54:31.868553 augenrules[1373]: No rules
Jan 30 12:54:31.878305 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 12:54:31.879762 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 12:54:31.882979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 12:54:31.883128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 12:54:31.890545 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 12:54:31.897995 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 12:54:31.907037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 12:54:31.913961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 12:54:31.918962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 12:54:31.920332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 12:54:31.925324 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 12:54:31.940961 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 12:54:31.942157 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 12:54:31.942649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 12:54:31.942821 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 12:54:31.944430 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 12:54:31.944582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 12:54:31.946024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 12:54:31.946164 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 12:54:31.948435 systemd-resolved[1309]: Positive Trust Anchors:
Jan 30 12:54:31.948694 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:54:31.948807 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:54:31.958209 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Jan 30 12:54:31.966026 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:54:31.970000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 12:54:31.973130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:54:31.977041 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 12:54:31.978215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 12:54:31.978284 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 12:54:31.979700 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:54:31.981211 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 12:54:31.985966 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 12:54:31.999163 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 12:54:32.010943 lvm[1399]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:54:32.022862 systemd-networkd[1385]: lo: Link UP
Jan 30 12:54:32.022871 systemd-networkd[1385]: lo: Gained carrier
Jan 30 12:54:32.024128 systemd-networkd[1385]: Enumeration completed
Jan 30 12:54:32.024290 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 12:54:32.024661 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:54:32.024664 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 12:54:32.025256 systemd-networkd[1385]: eth0: Link UP
Jan 30 12:54:32.025265 systemd-networkd[1385]: eth0: Gained carrier
Jan 30 12:54:32.025279 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 12:54:32.026098 systemd[1]: Reached target network.target - Network.
Jan 30 12:54:32.036943 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 12:54:32.038221 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 12:54:32.039772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 12:54:32.041548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:54:32.042975 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 12:54:32.043631 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Jan 30 12:54:32.043822 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 12:54:31.569792 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 12:54:31.576366 systemd-journald[1105]: Time jumped backwards, rotating.
Jan 30 12:54:31.571079 systemd-timesyncd[1387]: Initial clock synchronization to Thu 2025-01-30 12:54:31.569712 UTC.
Jan 30 12:54:31.572190 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 12:54:31.572231 systemd-resolved[1309]: Clock change detected. Flushing caches.
Jan 30 12:54:31.578460 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:54:31.580038 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 12:54:31.580812 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 12:54:31.582102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 12:54:31.583444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 12:54:31.585229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 12:54:31.586777 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 12:54:31.588059 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 12:54:31.589309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 12:54:31.589345 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:54:31.590263 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:54:31.592250 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 12:54:31.594669 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:54:31.605916 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:54:31.607710 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:54:31.609168 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:54:31.610827 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:54:31.611842 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:54:31.613175 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:54:31.613210 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:54:31.614552 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:54:31.616697 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:54:31.619117 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:54:31.621256 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:54:31.622366 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:54:31.623496 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:54:31.628058 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 12:54:31.631182 jq[1418]: false Jan 30 12:54:31.635089 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:54:31.639119 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 12:54:31.646238 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 12:54:31.651376 extend-filesystems[1419]: Found loop3 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found loop4 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found loop5 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda1 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda2 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda3 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found usr Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda4 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda6 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda7 Jan 30 12:54:31.652446 extend-filesystems[1419]: Found vda9 Jan 30 12:54:31.652446 extend-filesystems[1419]: Checking size of /dev/vda9 Jan 30 12:54:31.655794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:54:31.656308 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:54:31.657381 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:54:31.661120 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:54:31.665969 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 12:54:31.666139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:54:31.666410 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:54:31.666563 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 12:54:31.668732 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 30 12:54:31.668884 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:54:31.674081 jq[1435]: true Jan 30 12:54:31.674358 dbus-daemon[1417]: [system] SELinux support is enabled Jan 30 12:54:31.676337 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:54:31.681060 extend-filesystems[1419]: Resized partition /dev/vda9 Jan 30 12:54:31.689793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:54:31.689852 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:54:31.693957 jq[1442]: true Jan 30 12:54:31.702248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) Jan 30 12:54:31.691320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:54:31.702372 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:54:31.703738 tar[1438]: linux-arm64/helm Jan 30 12:54:31.691342 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:54:31.704468 (ntainerd)[1445]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:54:31.713738 systemd-logind[1427]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:54:31.714856 systemd-logind[1427]: New seat seat0. Jan 30 12:54:31.715906 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:54:31.716585 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 12:54:31.738784 update_engine[1433]: I20250130 12:54:31.738627 1433 main.cc:92] Flatcar Update Engine starting Jan 30 12:54:31.749986 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:54:31.750072 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:54:31.751766 update_engine[1433]: I20250130 12:54:31.750131 1433 update_check_scheduler.cc:74] Next update check in 3m9s Jan 30 12:54:31.760696 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:54:31.762693 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:54:31.762693 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:54:31.762693 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:54:31.766284 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Jan 30 12:54:31.764024 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:54:31.768067 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:54:31.816969 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:54:31.817975 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:54:31.822392 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:54:31.836044 locksmithd[1464]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:54:31.919908 containerd[1445]: time="2025-01-30T12:54:31.916852240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 12:54:31.946241 containerd[1445]: time="2025-01-30T12:54:31.946146400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 12:54:31.947826 containerd[1445]: time="2025-01-30T12:54:31.947788480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:54:31.948142 containerd[1445]: time="2025-01-30T12:54:31.948114080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:54:31.948216 containerd[1445]: time="2025-01-30T12:54:31.948201400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:54:31.948409 containerd[1445]: time="2025-01-30T12:54:31.948390760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:54:31.948601 containerd[1445]: time="2025-01-30T12:54:31.948582040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.948766 containerd[1445]: time="2025-01-30T12:54:31.948745840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:54:31.948822 containerd[1445]: time="2025-01-30T12:54:31.948809400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.949071 containerd[1445]: time="2025-01-30T12:54:31.949048760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:54:31.949272 containerd[1445]: time="2025-01-30T12:54:31.949253240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.949341 containerd[1445]: time="2025-01-30T12:54:31.949326680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:54:31.949437 containerd[1445]: time="2025-01-30T12:54:31.949421920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.949669 containerd[1445]: time="2025-01-30T12:54:31.949649840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.950094 containerd[1445]: time="2025-01-30T12:54:31.950073440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:54:31.950349 containerd[1445]: time="2025-01-30T12:54:31.950327280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:54:31.950463 containerd[1445]: time="2025-01-30T12:54:31.950446720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 12:54:31.950669 containerd[1445]: time="2025-01-30T12:54:31.950650600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 12:54:31.950778 containerd[1445]: time="2025-01-30T12:54:31.950761920Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:54:31.956405 containerd[1445]: time="2025-01-30T12:54:31.956375680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:54:31.956576 containerd[1445]: time="2025-01-30T12:54:31.956560880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:54:31.956671 containerd[1445]: time="2025-01-30T12:54:31.956657480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:54:31.956795 containerd[1445]: time="2025-01-30T12:54:31.956779240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:54:31.956855 containerd[1445]: time="2025-01-30T12:54:31.956842840Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:54:31.957127 containerd[1445]: time="2025-01-30T12:54:31.957107080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:54:31.957552 containerd[1445]: time="2025-01-30T12:54:31.957529120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:54:31.957857 containerd[1445]: time="2025-01-30T12:54:31.957763320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:54:31.957857 containerd[1445]: time="2025-01-30T12:54:31.957786440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.957975920Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958060960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958079400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958091880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958105200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958120600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958133480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958145080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958157160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958177240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958190320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958211800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958232480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958591 containerd[1445]: time="2025-01-30T12:54:31.958245320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958258800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958269800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958281920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958296000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958309360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958320440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958332200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958344280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958360960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958384320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958397080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.958829 containerd[1445]: time="2025-01-30T12:54:31.958406920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:54:31.959205 containerd[1445]: time="2025-01-30T12:54:31.959154880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:54:31.959369 containerd[1445]: time="2025-01-30T12:54:31.959185560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959420720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959447040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959456880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959472640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959483520Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:54:31.959662 containerd[1445]: time="2025-01-30T12:54:31.959493520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 12:54:31.960244 containerd[1445]: time="2025-01-30T12:54:31.960188200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:54:31.960780 containerd[1445]: time="2025-01-30T12:54:31.960373920Z" level=info msg="Connect containerd service" Jan 30 12:54:31.960780 containerd[1445]: time="2025-01-30T12:54:31.960415800Z" level=info msg="using legacy CRI server" Jan 30 12:54:31.960780 containerd[1445]: time="2025-01-30T12:54:31.960423440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:54:31.962192 containerd[1445]: time="2025-01-30T12:54:31.962010120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963101160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963238840Z" level=info msg="Start subscribing containerd event" Jan 30 
12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963278480Z" level=info msg="Start recovering state" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963516080Z" level=info msg="Start event monitor" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963532200Z" level=info msg="Start snapshots syncer" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963542680Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:54:31.963928 containerd[1445]: time="2025-01-30T12:54:31.963554080Z" level=info msg="Start streaming server" Jan 30 12:54:31.964763 containerd[1445]: time="2025-01-30T12:54:31.964732560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:54:31.964950 containerd[1445]: time="2025-01-30T12:54:31.964932160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:54:31.965129 containerd[1445]: time="2025-01-30T12:54:31.965113680Z" level=info msg="containerd successfully booted in 0.049121s" Jan 30 12:54:31.965200 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:54:32.096245 tar[1438]: linux-arm64/LICENSE Jan 30 12:54:32.096441 tar[1438]: linux-arm64/README.md Jan 30 12:54:32.107652 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 12:54:32.611780 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:54:32.630719 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:54:32.644187 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:54:32.649463 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:54:32.649700 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:54:32.652409 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:54:32.665959 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 30 12:54:32.676275 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:54:32.678510 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:54:32.679824 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:54:33.187069 systemd-networkd[1385]: eth0: Gained IPv6LL Jan 30 12:54:33.189413 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:54:33.191566 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:54:33.215738 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:54:33.218763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:54:33.221155 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:54:33.238127 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:54:33.238331 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:54:33.240631 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:54:33.248758 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:54:33.809051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:54:33.810732 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:54:33.812097 systemd[1]: Startup finished in 622ms (kernel) + 4.495s (initrd) + 4.114s (userspace) = 9.232s. 
Jan 30 12:54:33.813849 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:54:33.824737 agetty[1505]: failed to open credentials directory Jan 30 12:54:33.824983 agetty[1507]: failed to open credentials directory Jan 30 12:54:34.391405 kubelet[1530]: E0130 12:54:34.391344 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:54:34.393915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:54:34.394064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:54:38.427690 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:54:38.428902 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:54972.service - OpenSSH per-connection server daemon (10.0.0.1:54972). Jan 30 12:54:38.489423 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 54972 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:54:38.491566 sshd-session[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:54:38.499684 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:54:38.514435 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:54:38.516602 systemd-logind[1427]: New session 1 of user core. Jan 30 12:54:38.524953 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:54:38.534205 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 12:54:38.537657 (systemd)[1549]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:54:38.615597 systemd[1549]: Queued start job for default target default.target. Jan 30 12:54:38.626217 systemd[1549]: Created slice app.slice - User Application Slice. Jan 30 12:54:38.626472 systemd[1549]: Reached target paths.target - Paths. Jan 30 12:54:38.626570 systemd[1549]: Reached target timers.target - Timers. Jan 30 12:54:38.627961 systemd[1549]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:54:38.639119 systemd[1549]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:54:38.639240 systemd[1549]: Reached target sockets.target - Sockets. Jan 30 12:54:38.639253 systemd[1549]: Reached target basic.target - Basic System. Jan 30 12:54:38.639294 systemd[1549]: Reached target default.target - Main User Target. Jan 30 12:54:38.639321 systemd[1549]: Startup finished in 93ms. Jan 30 12:54:38.639440 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:54:38.640837 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:54:38.703112 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:54978.service - OpenSSH per-connection server daemon (10.0.0.1:54978). Jan 30 12:54:38.748783 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 54978 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:54:38.750271 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:54:38.756155 systemd-logind[1427]: New session 2 of user core. Jan 30 12:54:38.766141 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:54:38.822117 sshd[1562]: Connection closed by 10.0.0.1 port 54978 Jan 30 12:54:38.823103 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Jan 30 12:54:38.833564 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:54978.service: Deactivated successfully. 
Jan 30 12:54:38.837222 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:54:38.840164 systemd-logind[1427]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:54:38.842503 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:54980.service - OpenSSH per-connection server daemon (10.0.0.1:54980). Jan 30 12:54:38.846252 systemd-logind[1427]: Removed session 2. Jan 30 12:54:38.900435 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 54980 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:54:38.901951 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:54:38.909101 systemd-logind[1427]: New session 3 of user core. Jan 30 12:54:38.922147 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:54:38.972953 sshd[1569]: Connection closed by 10.0.0.1 port 54980 Jan 30 12:54:38.973518 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Jan 30 12:54:38.982676 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:54980.service: Deactivated successfully. Jan 30 12:54:38.985716 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:54:38.986994 systemd-logind[1427]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:54:39.000255 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:54988.service - OpenSSH per-connection server daemon (10.0.0.1:54988). Jan 30 12:54:39.001223 systemd-logind[1427]: Removed session 3. Jan 30 12:54:39.049485 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 54988 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:54:39.050748 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:54:39.054971 systemd-logind[1427]: New session 4 of user core. Jan 30 12:54:39.067125 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 30 12:54:39.121391 sshd[1576]: Connection closed by 10.0.0.1 port 54988
Jan 30 12:54:39.121960 sshd-session[1574]: pam_unix(sshd:session): session closed for user core
Jan 30 12:54:39.136029 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:54988.service: Deactivated successfully.
Jan 30 12:54:39.138658 systemd[1]: session-4.scope: Deactivated successfully.
Jan 30 12:54:39.140203 systemd-logind[1427]: Session 4 logged out. Waiting for processes to exit.
Jan 30 12:54:39.141984 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:54990.service - OpenSSH per-connection server daemon (10.0.0.1:54990).
Jan 30 12:54:39.142955 systemd-logind[1427]: Removed session 4.
Jan 30 12:54:39.187238 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 54990 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 12:54:39.188667 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:54:39.193306 systemd-logind[1427]: New session 5 of user core.
Jan 30 12:54:39.208118 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 30 12:54:39.274635 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 30 12:54:39.275372 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:54:39.290008 sudo[1584]: pam_unix(sudo:session): session closed for user root
Jan 30 12:54:39.291630 sshd[1583]: Connection closed by 10.0.0.1 port 54990
Jan 30 12:54:39.292028 sshd-session[1581]: pam_unix(sshd:session): session closed for user core
Jan 30 12:54:39.307615 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:54990.service: Deactivated successfully.
Jan 30 12:54:39.310421 systemd[1]: session-5.scope: Deactivated successfully.
Jan 30 12:54:39.311934 systemd-logind[1427]: Session 5 logged out. Waiting for processes to exit.
Jan 30 12:54:39.323256 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:54994.service - OpenSSH per-connection server daemon (10.0.0.1:54994).
Jan 30 12:54:39.327578 systemd-logind[1427]: Removed session 5.
Jan 30 12:54:39.362716 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 54994 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 12:54:39.364155 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:54:39.368452 systemd-logind[1427]: New session 6 of user core.
Jan 30 12:54:39.382094 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 12:54:39.434985 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 30 12:54:39.435279 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:54:39.438721 sudo[1593]: pam_unix(sudo:session): session closed for user root
Jan 30 12:54:39.444354 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 30 12:54:39.444772 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:54:39.464504 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 12:54:39.491284 augenrules[1615]: No rules
Jan 30 12:54:39.492495 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 12:54:39.492735 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 12:54:39.493836 sudo[1592]: pam_unix(sudo:session): session closed for user root
Jan 30 12:54:39.495987 sshd[1591]: Connection closed by 10.0.0.1 port 54994
Jan 30 12:54:39.495828 sshd-session[1589]: pam_unix(sshd:session): session closed for user core
Jan 30 12:54:39.505737 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:54994.service: Deactivated successfully.
Jan 30 12:54:39.507352 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 12:54:39.509021 systemd-logind[1427]: Session 6 logged out. Waiting for processes to exit.
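The sudo entries above use sudo's standard log layout: the invoking user, then `" ; "`-separated `KEY=value` fields, with the executed command in the trailing `COMMAND` field. A quick sketch of splitting one of those entries back into fields (illustrative snippet, not part of the log; the exact separators are an assumption based on the entries shown):

```python
# Split a sudo log entry like the ones above into its KEY=value fields.
# Layout assumed: "<user> : PWD=... ; USER=... ; COMMAND=..."
line = ("core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules "
        "/etc/audit/rules.d/99-default.rules")

# Drop the "<user> : " prefix, then split on " ; "; COMMAND may itself
# contain spaces, so each field is split on the first "=" only.
fields = dict(f.split("=", 1) for f in line.split(" : ", 1)[1].split(" ; "))
print(fields["COMMAND"])
```

The same shape fits the other two sudo commands in this section (`setenforce 1` and `systemctl restart audit-rules`).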
Jan 30 12:54:39.526647 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:55000.service - OpenSSH per-connection server daemon (10.0.0.1:55000).
Jan 30 12:54:39.528300 systemd-logind[1427]: Removed session 6.
Jan 30 12:54:39.568135 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 55000 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 12:54:39.571127 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:54:39.578766 systemd-logind[1427]: New session 7 of user core.
Jan 30 12:54:39.586107 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 12:54:39.638345 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 30 12:54:39.638647 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 30 12:54:40.039197 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 30 12:54:40.039390 (dockerd)[1646]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 30 12:54:40.370009 dockerd[1646]: time="2025-01-30T12:54:40.369706880Z" level=info msg="Starting up"
Jan 30 12:54:40.526411 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2642885292-merged.mount: Deactivated successfully.
Jan 30 12:54:40.549744 dockerd[1646]: time="2025-01-30T12:54:40.549489720Z" level=info msg="Loading containers: start."
Jan 30 12:54:40.728951 kernel: Initializing XFRM netlink socket
Jan 30 12:54:40.826428 systemd-networkd[1385]: docker0: Link UP
Jan 30 12:54:40.887579 dockerd[1646]: time="2025-01-30T12:54:40.887507360Z" level=info msg="Loading containers: done."
Jan 30 12:54:40.907675 dockerd[1646]: time="2025-01-30T12:54:40.907610240Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 30 12:54:40.908033 dockerd[1646]: time="2025-01-30T12:54:40.907765440Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jan 30 12:54:40.908263 dockerd[1646]: time="2025-01-30T12:54:40.908216240Z" level=info msg="Daemon has completed initialization"
Jan 30 12:54:40.944147 dockerd[1646]: time="2025-01-30T12:54:40.944004360Z" level=info msg="API listen on /run/docker.sock"
Jan 30 12:54:40.944414 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 30 12:54:41.523635 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1792845323-merged.mount: Deactivated successfully.
Jan 30 12:54:41.681943 containerd[1445]: time="2025-01-30T12:54:41.681645800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 30 12:54:42.494257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347528848.mount: Deactivated successfully.
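The dockerd entries carry their own RFC 3339 timestamps (`time="…"`), so the daemon's initialization latency can be read straight off the log: from "Starting up" to "Daemon has completed initialization". A small sketch (illustrative, not part of the log; note the log's nanosecond fractions exceed what Python's `datetime` stores, so they are truncated to microseconds):

```python
from datetime import datetime

def parse_rfc3339_ns(ts: str) -> datetime:
    """Parse e.g. '2025-01-30T12:54:40.369706880Z', truncating ns -> us."""
    base, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{base}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

# Timestamps copied from the dockerd entries above.
start = parse_rfc3339_ns("2025-01-30T12:54:40.369706880Z")  # "Starting up"
ready = parse_rfc3339_ns("2025-01-30T12:54:40.908216240Z")  # "completed initialization"
print(f"dockerd init took {(ready - start).total_seconds():.3f}s")  # → 0.539s
```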
Jan 30 12:54:43.501905 containerd[1445]: time="2025-01-30T12:54:43.501823440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:43.504186 containerd[1445]: time="2025-01-30T12:54:43.504123000Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937"
Jan 30 12:54:43.505540 containerd[1445]: time="2025-01-30T12:54:43.505504000Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:43.510506 containerd[1445]: time="2025-01-30T12:54:43.510460040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:43.511572 containerd[1445]: time="2025-01-30T12:54:43.511538280Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 1.82984228s"
Jan 30 12:54:43.511609 containerd[1445]: time="2025-01-30T12:54:43.511578480Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 30 12:54:43.531164 containerd[1445]: time="2025-01-30T12:54:43.531125320Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 30 12:54:44.644415 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
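Each containerd "Pulled image … in …s" entry reports both the image size and the wall-clock pull duration, so an effective throughput falls out directly. A sketch using the kube-apiserver figures from the entries above (illustrative only; "size" here is the compressed content containerd reports, so this is registry transfer rate, not on-disk unpack rate):

```python
# Figures copied from the "Pulled image registry.k8s.io/kube-apiserver:v1.30.9"
# entry above: size "29861735" bytes, pulled "in 1.82984228s".
size_bytes = 29_861_735
duration_s = 1.82984228

mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"effective pull throughput: ~{mib_per_s:.1f} MiB/s")  # → ~15.6 MiB/s
```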
Jan 30 12:54:44.654449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:54:44.759665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:54:44.765120 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 12:54:44.825814 kubelet[1927]: E0130 12:54:44.825454 1927 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 12:54:44.828561 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 12:54:44.828708 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 12:54:45.293585 containerd[1445]: time="2025-01-30T12:54:45.293536520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:45.294730 containerd[1445]: time="2025-01-30T12:54:45.294684800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563"
Jan 30 12:54:45.295619 containerd[1445]: time="2025-01-30T12:54:45.295584080Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:45.299123 containerd[1445]: time="2025-01-30T12:54:45.299079760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:45.300231 containerd[1445]: time="2025-01-30T12:54:45.300138360Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.76897328s"
Jan 30 12:54:45.300231 containerd[1445]: time="2025-01-30T12:54:45.300179120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 30 12:54:45.320813 containerd[1445]: time="2025-01-30T12:54:45.320639920Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 30 12:54:46.233415 containerd[1445]: time="2025-01-30T12:54:46.233359640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:46.234902 containerd[1445]: time="2025-01-30T12:54:46.234844000Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340"
Jan 30 12:54:46.235643 containerd[1445]: time="2025-01-30T12:54:46.235620440Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:46.239169 containerd[1445]: time="2025-01-30T12:54:46.239124880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:46.240409 containerd[1445]: time="2025-01-30T12:54:46.240114520Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 919.4328ms"
Jan 30 12:54:46.240409 containerd[1445]: time="2025-01-30T12:54:46.240156480Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 30 12:54:46.259391 containerd[1445]: time="2025-01-30T12:54:46.259347320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 30 12:54:47.165846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961830181.mount: Deactivated successfully.
Jan 30 12:54:47.368176 containerd[1445]: time="2025-01-30T12:54:47.368113400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:47.368743 containerd[1445]: time="2025-01-30T12:54:47.368699800Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714"
Jan 30 12:54:47.369524 containerd[1445]: time="2025-01-30T12:54:47.369491000Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:47.371653 containerd[1445]: time="2025-01-30T12:54:47.371618880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:47.372348 containerd[1445]: time="2025-01-30T12:54:47.372254160Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.11286772s"
Jan 30 12:54:47.372348 containerd[1445]: time="2025-01-30T12:54:47.372283760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 30 12:54:47.393705 containerd[1445]: time="2025-01-30T12:54:47.393665960Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 30 12:54:48.029239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270436395.mount: Deactivated successfully.
Jan 30 12:54:48.825361 containerd[1445]: time="2025-01-30T12:54:48.825305240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:48.826687 containerd[1445]: time="2025-01-30T12:54:48.826450480Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 30 12:54:48.827621 containerd[1445]: time="2025-01-30T12:54:48.827584800Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:48.830685 containerd[1445]: time="2025-01-30T12:54:48.830638960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:48.832513 containerd[1445]: time="2025-01-30T12:54:48.832463440Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.4386042s"
Jan 30 12:54:48.832513 containerd[1445]: time="2025-01-30T12:54:48.832505320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 30 12:54:48.854006 containerd[1445]: time="2025-01-30T12:54:48.853970880Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 30 12:54:49.342345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366731164.mount: Deactivated successfully.
Jan 30 12:54:49.350229 containerd[1445]: time="2025-01-30T12:54:49.350173880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:49.351660 containerd[1445]: time="2025-01-30T12:54:49.351608240Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 30 12:54:49.352645 containerd[1445]: time="2025-01-30T12:54:49.352620360Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:49.356509 containerd[1445]: time="2025-01-30T12:54:49.354970960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:49.356509 containerd[1445]: time="2025-01-30T12:54:49.356379840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 502.22164ms"
Jan 30 12:54:49.356509 containerd[1445]: time="2025-01-30T12:54:49.356413440Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 30 12:54:49.376304 containerd[1445]: time="2025-01-30T12:54:49.376217920Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 30 12:54:49.890438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3664314916.mount: Deactivated successfully.
Jan 30 12:54:51.389267 containerd[1445]: time="2025-01-30T12:54:51.389174760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:51.390819 containerd[1445]: time="2025-01-30T12:54:51.390771480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 30 12:54:51.391883 containerd[1445]: time="2025-01-30T12:54:51.391849120Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:51.395183 containerd[1445]: time="2025-01-30T12:54:51.395141680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:54:51.397816 containerd[1445]: time="2025-01-30T12:54:51.397654080Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.02117592s"
Jan 30 12:54:51.397816 containerd[1445]: time="2025-01-30T12:54:51.397698080Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 30 12:54:54.916567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 30 12:54:54.930125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:54:55.050413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:54:55.055443 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 30 12:54:55.104302 kubelet[2153]: E0130 12:54:55.104088 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 30 12:54:55.107500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 12:54:55.107640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 30 12:54:56.275099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:54:56.285188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:54:56.302602 systemd[1]: Reloading requested from client PID 2169 ('systemctl') (unit session-7.scope)...
Jan 30 12:54:56.302619 systemd[1]: Reloading...
Jan 30 12:54:56.372919 zram_generator::config[2208]: No configuration found.
Jan 30 12:54:56.467428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 12:54:56.520277 systemd[1]: Reloading finished in 217 ms.
Jan 30 12:54:56.561119 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 12:54:56.561202 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 12:54:56.561428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:54:56.568570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 12:54:56.668323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 12:54:56.673270 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 12:54:56.713999 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:54:56.713999 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 12:54:56.713999 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 12:54:56.714331 kubelet[2254]: I0130 12:54:56.714105 2254 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 12:54:57.218508 kubelet[2254]: I0130 12:54:57.218456 2254 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 12:54:57.218508 kubelet[2254]: I0130 12:54:57.218492 2254 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 12:54:57.218811 kubelet[2254]: I0130 12:54:57.218719 2254 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 12:54:57.278260 kubelet[2254]: I0130 12:54:57.278207 2254 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 12:54:57.278807 kubelet[2254]: E0130 12:54:57.278754 2254 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.297158 kubelet[2254]: I0130 12:54:57.297122 2254 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 12:54:57.302882 kubelet[2254]: I0130 12:54:57.301367 2254 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 12:54:57.302882 kubelet[2254]: I0130 12:54:57.301438 2254 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 12:54:57.302882 kubelet[2254]: I0130 12:54:57.301814 2254 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 12:54:57.302882 kubelet[2254]: I0130 12:54:57.301824 2254 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 12:54:57.302882 kubelet[2254]: I0130 12:54:57.302371 2254 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 12:54:57.304085 kubelet[2254]: I0130 12:54:57.304065 2254 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 12:54:57.304186 kubelet[2254]: I0130 12:54:57.304175 2254 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 12:54:57.304957 kubelet[2254]: I0130 12:54:57.304941 2254 kubelet.go:312] "Adding apiserver pod source"
Jan 30 12:54:57.305036 kubelet[2254]: I0130 12:54:57.305026 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 12:54:57.305626 kubelet[2254]: W0130 12:54:57.305574 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.305729 kubelet[2254]: E0130 12:54:57.305716 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.305854 kubelet[2254]: W0130 12:54:57.305827 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.305955 kubelet[2254]: E0130 12:54:57.305943 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.307262 kubelet[2254]: I0130 12:54:57.307166 2254 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 30 12:54:57.308079 kubelet[2254]: I0130 12:54:57.308059 2254 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 12:54:57.308533 kubelet[2254]: W0130 12:54:57.308508 2254 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 12:54:57.309967 kubelet[2254]: I0130 12:54:57.309945 2254 server.go:1264] "Started kubelet"
Jan 30 12:54:57.310703 kubelet[2254]: I0130 12:54:57.310621 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 12:54:57.311032 kubelet[2254]: I0130 12:54:57.311007 2254 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 12:54:57.311091 kubelet[2254]: I0130 12:54:57.311064 2254 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 12:54:57.311462 kubelet[2254]: I0130 12:54:57.311442 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 12:54:57.312429 kubelet[2254]: I0130 12:54:57.312396 2254 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 12:54:57.319083 kubelet[2254]: I0130 12:54:57.319056 2254 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 12:54:57.324426 kubelet[2254]: I0130 12:54:57.324285 2254 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 12:54:57.328951 kubelet[2254]: I0130 12:54:57.328867 2254 factory.go:221] Registration of the systemd container factory successfully
Jan 30 12:54:57.329183 kubelet[2254]: I0130 12:54:57.329144 2254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 12:54:57.330551 kubelet[2254]: E0130 12:54:57.330441 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms"
Jan 30 12:54:57.331225 kubelet[2254]: E0130 12:54:57.322988 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f799d35edfc78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:54:57.30991628 +0000 UTC m=+0.633131801,LastTimestamp:2025-01-30 12:54:57.30991628 +0000 UTC m=+0.633131801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 30 12:54:57.331225 kubelet[2254]: W0130 12:54:57.331053 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.331225 kubelet[2254]: E0130 12:54:57.331101 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.331881 kubelet[2254]: I0130 12:54:57.331861 2254 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 12:54:57.332995 kubelet[2254]: I0130 12:54:57.332264 2254 factory.go:221] Registration of the containerd container factory successfully
Jan 30 12:54:57.346075 kubelet[2254]: I0130 12:54:57.346032 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 12:54:57.347698 kubelet[2254]: I0130 12:54:57.347660 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 12:54:57.348335 kubelet[2254]: I0130 12:54:57.348121 2254 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 12:54:57.348335 kubelet[2254]: I0130 12:54:57.348153 2254 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 12:54:57.348335 kubelet[2254]: E0130 12:54:57.348201 2254 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 12:54:57.349091 kubelet[2254]: W0130 12:54:57.349037 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.349387 kubelet[2254]: E0130 12:54:57.349195 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused
Jan 30 12:54:57.351866 kubelet[2254]: I0130 12:54:57.351818 2254 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 12:54:57.351866 kubelet[2254]: I0130 12:54:57.351844 2254 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 12:54:57.351866 kubelet[2254]: I0130 12:54:57.351864 2254 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 12:54:57.421084 kubelet[2254]: I0130 12:54:57.421027 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 30 12:54:57.421440 kubelet[2254]: E0130 12:54:57.421398 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost"
Jan 30 12:54:57.448967 kubelet[2254]: E0130 12:54:57.448923 2254 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 12:54:57.531854 kubelet[2254]: E0130 12:54:57.531741 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms"
Jan 30 12:54:57.571017 kubelet[2254]: I0130 12:54:57.570987 2254 policy_none.go:49] "None policy: Start"
Jan 30 12:54:57.572369 kubelet[2254]: I0130 12:54:57.572334 2254 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 12:54:57.572369 kubelet[2254]: I0130 12:54:57.572377 2254 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 12:54:57.582203 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 12:54:57.598217 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 12:54:57.601312 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 12:54:57.619967 kubelet[2254]: I0130 12:54:57.619933 2254 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:54:57.620397 kubelet[2254]: I0130 12:54:57.620160 2254 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:54:57.620397 kubelet[2254]: I0130 12:54:57.620279 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:54:57.621863 kubelet[2254]: E0130 12:54:57.621814 2254 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 12:54:57.623266 kubelet[2254]: I0130 12:54:57.622796 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:54:57.623266 kubelet[2254]: E0130 12:54:57.623143 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:54:57.650663 kubelet[2254]: I0130 12:54:57.649524 2254 topology_manager.go:215] "Topology Admit Handler" podUID="6fc5c1b5a8b5a8a069728adaae460700" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:54:57.651582 kubelet[2254]: I0130 12:54:57.651377 2254 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:54:57.652806 kubelet[2254]: I0130 12:54:57.652754 2254 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:54:57.659960 systemd[1]: Created slice kubepods-burstable-pod6fc5c1b5a8b5a8a069728adaae460700.slice - libcontainer container kubepods-burstable-pod6fc5c1b5a8b5a8a069728adaae460700.slice. 
Jan 30 12:54:57.683521 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 30 12:54:57.688232 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 12:54:57.734873 kubelet[2254]: I0130 12:54:57.734784 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:54:57.734873 kubelet[2254]: I0130 12:54:57.734831 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:54:57.734873 kubelet[2254]: I0130 12:54:57.734865 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:54:57.734873 kubelet[2254]: I0130 12:54:57.734899 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:54:57.734873 
kubelet[2254]: I0130 12:54:57.734927 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:54:57.735486 kubelet[2254]: I0130 12:54:57.734945 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:54:57.735486 kubelet[2254]: I0130 12:54:57.734980 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:54:57.735486 kubelet[2254]: I0130 12:54:57.735024 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:54:57.735486 kubelet[2254]: I0130 12:54:57.735053 2254 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 
12:54:57.932348 kubelet[2254]: E0130 12:54:57.932274 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Jan 30 12:54:57.981866 kubelet[2254]: E0130 12:54:57.980848 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:57.982011 containerd[1445]: time="2025-01-30T12:54:57.981694240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6fc5c1b5a8b5a8a069728adaae460700,Namespace:kube-system,Attempt:0,}" Jan 30 12:54:57.987338 kubelet[2254]: E0130 12:54:57.987267 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:57.987869 containerd[1445]: time="2025-01-30T12:54:57.987798360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 12:54:57.991447 kubelet[2254]: E0130 12:54:57.991367 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:57.991867 containerd[1445]: time="2025-01-30T12:54:57.991819600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 12:54:58.025135 kubelet[2254]: I0130 12:54:58.025097 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:54:58.025835 kubelet[2254]: E0130 12:54:58.025559 2254 kubelet_node_status.go:96] "Unable to register node with 
API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:54:58.169329 kubelet[2254]: W0130 12:54:58.169271 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.169329 kubelet[2254]: E0130 12:54:58.169328 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.382872 kubelet[2254]: W0130 12:54:58.382733 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.382872 kubelet[2254]: E0130 12:54:58.382794 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.444877 kubelet[2254]: W0130 12:54:58.444818 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.445448 kubelet[2254]: E0130 12:54:58.445223 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection 
refused Jan 30 12:54:58.449083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3140579210.mount: Deactivated successfully. Jan 30 12:54:58.459817 containerd[1445]: time="2025-01-30T12:54:58.459761480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:54:58.461575 containerd[1445]: time="2025-01-30T12:54:58.461507960Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:54:58.465973 containerd[1445]: time="2025-01-30T12:54:58.465608720Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:54:58.466974 containerd[1445]: time="2025-01-30T12:54:58.466940480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:54:58.468483 containerd[1445]: time="2025-01-30T12:54:58.468436880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:54:58.469578 containerd[1445]: time="2025-01-30T12:54:58.469521840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:54:58.470172 containerd[1445]: time="2025-01-30T12:54:58.470140240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.3654ms" Jan 30 12:54:58.471878 containerd[1445]: time="2025-01-30T12:54:58.471488280Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:54:58.475903 containerd[1445]: time="2025-01-30T12:54:58.475838800Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:54:58.476846 containerd[1445]: time="2025-01-30T12:54:58.476815880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.94332ms" Jan 30 12:54:58.494672 containerd[1445]: time="2025-01-30T12:54:58.494428720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.50936ms" Jan 30 12:54:58.542861 kubelet[2254]: W0130 12:54:58.542739 2254 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.542861 kubelet[2254]: E0130 12:54:58.542868 2254 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": 
dial tcp 10.0.0.65:6443: connect: connection refused Jan 30 12:54:58.660374 containerd[1445]: time="2025-01-30T12:54:58.660077880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:54:58.660374 containerd[1445]: time="2025-01-30T12:54:58.660148320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:54:58.660374 containerd[1445]: time="2025-01-30T12:54:58.660160000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.660374 containerd[1445]: time="2025-01-30T12:54:58.660246400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.664577 containerd[1445]: time="2025-01-30T12:54:58.664300480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:54:58.664577 containerd[1445]: time="2025-01-30T12:54:58.664406880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:54:58.664577 containerd[1445]: time="2025-01-30T12:54:58.664423320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.664577 containerd[1445]: time="2025-01-30T12:54:58.664550320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.665835 containerd[1445]: time="2025-01-30T12:54:58.665734040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:54:58.665835 containerd[1445]: time="2025-01-30T12:54:58.665803320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:54:58.665835 containerd[1445]: time="2025-01-30T12:54:58.665816000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.665991 containerd[1445]: time="2025-01-30T12:54:58.665914520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:54:58.686119 systemd[1]: Started cri-containerd-46e0632bf295bc91146a379963f1f757a396eed104743c04db373bb8ccc8826f.scope - libcontainer container 46e0632bf295bc91146a379963f1f757a396eed104743c04db373bb8ccc8826f. Jan 30 12:54:58.687995 systemd[1]: Started cri-containerd-df3666eea9edbe747ae93004c766603b5c61905d9ff703e65098c046e87c1301.scope - libcontainer container df3666eea9edbe747ae93004c766603b5c61905d9ff703e65098c046e87c1301. Jan 30 12:54:58.693232 systemd[1]: Started cri-containerd-ffda86d0e616f65a74d5741dce620f66bddc8bbe448f9f5ad6aef0e117438997.scope - libcontainer container ffda86d0e616f65a74d5741dce620f66bddc8bbe448f9f5ad6aef0e117438997. 
Jan 30 12:54:58.729781 containerd[1445]: time="2025-01-30T12:54:58.729728160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6fc5c1b5a8b5a8a069728adaae460700,Namespace:kube-system,Attempt:0,} returns sandbox id \"46e0632bf295bc91146a379963f1f757a396eed104743c04db373bb8ccc8826f\"" Jan 30 12:54:58.731617 kubelet[2254]: E0130 12:54:58.731584 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:58.732907 kubelet[2254]: E0130 12:54:58.732843 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Jan 30 12:54:58.735596 containerd[1445]: time="2025-01-30T12:54:58.735318440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"df3666eea9edbe747ae93004c766603b5c61905d9ff703e65098c046e87c1301\"" Jan 30 12:54:58.736321 containerd[1445]: time="2025-01-30T12:54:58.735980680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffda86d0e616f65a74d5741dce620f66bddc8bbe448f9f5ad6aef0e117438997\"" Jan 30 12:54:58.736396 kubelet[2254]: E0130 12:54:58.736063 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:58.737011 containerd[1445]: time="2025-01-30T12:54:58.736976800Z" level=info msg="CreateContainer within sandbox \"46e0632bf295bc91146a379963f1f757a396eed104743c04db373bb8ccc8826f\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 12:54:58.737944 kubelet[2254]: E0130 12:54:58.737750 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:58.739102 containerd[1445]: time="2025-01-30T12:54:58.739003720Z" level=info msg="CreateContainer within sandbox \"df3666eea9edbe747ae93004c766603b5c61905d9ff703e65098c046e87c1301\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 12:54:58.741108 containerd[1445]: time="2025-01-30T12:54:58.741055240Z" level=info msg="CreateContainer within sandbox \"ffda86d0e616f65a74d5741dce620f66bddc8bbe448f9f5ad6aef0e117438997\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 12:54:58.768632 containerd[1445]: time="2025-01-30T12:54:58.768584440Z" level=info msg="CreateContainer within sandbox \"46e0632bf295bc91146a379963f1f757a396eed104743c04db373bb8ccc8826f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73ca23ebbdc3eff73dd432b6d127b6ddb0c8ffcb2fccf94a4d9c132df307f64b\"" Jan 30 12:54:58.769476 containerd[1445]: time="2025-01-30T12:54:58.769448000Z" level=info msg="StartContainer for \"73ca23ebbdc3eff73dd432b6d127b6ddb0c8ffcb2fccf94a4d9c132df307f64b\"" Jan 30 12:54:58.769764 containerd[1445]: time="2025-01-30T12:54:58.769682920Z" level=info msg="CreateContainer within sandbox \"df3666eea9edbe747ae93004c766603b5c61905d9ff703e65098c046e87c1301\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d7421fd8a35059edac1e1b2b951b605e99f66fbf8ec124a56c2533d1259ffa2f\"" Jan 30 12:54:58.770199 containerd[1445]: time="2025-01-30T12:54:58.770176840Z" level=info msg="StartContainer for \"d7421fd8a35059edac1e1b2b951b605e99f66fbf8ec124a56c2533d1259ffa2f\"" Jan 30 12:54:58.771408 containerd[1445]: time="2025-01-30T12:54:58.770736160Z" level=info msg="CreateContainer within 
sandbox \"ffda86d0e616f65a74d5741dce620f66bddc8bbe448f9f5ad6aef0e117438997\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ae6d86c937fe21a2ac9ffe208bc3f12bc89fe3293e0a3e59c3d879b3fbbe1e6\"" Jan 30 12:54:58.771408 containerd[1445]: time="2025-01-30T12:54:58.771219040Z" level=info msg="StartContainer for \"2ae6d86c937fe21a2ac9ffe208bc3f12bc89fe3293e0a3e59c3d879b3fbbe1e6\"" Jan 30 12:54:58.802117 systemd[1]: Started cri-containerd-73ca23ebbdc3eff73dd432b6d127b6ddb0c8ffcb2fccf94a4d9c132df307f64b.scope - libcontainer container 73ca23ebbdc3eff73dd432b6d127b6ddb0c8ffcb2fccf94a4d9c132df307f64b. Jan 30 12:54:58.807039 systemd[1]: Started cri-containerd-2ae6d86c937fe21a2ac9ffe208bc3f12bc89fe3293e0a3e59c3d879b3fbbe1e6.scope - libcontainer container 2ae6d86c937fe21a2ac9ffe208bc3f12bc89fe3293e0a3e59c3d879b3fbbe1e6. Jan 30 12:54:58.808789 systemd[1]: Started cri-containerd-d7421fd8a35059edac1e1b2b951b605e99f66fbf8ec124a56c2533d1259ffa2f.scope - libcontainer container d7421fd8a35059edac1e1b2b951b605e99f66fbf8ec124a56c2533d1259ffa2f. 
Jan 30 12:54:58.827370 kubelet[2254]: I0130 12:54:58.827318 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:54:58.829201 kubelet[2254]: E0130 12:54:58.828104 2254 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jan 30 12:54:58.888642 containerd[1445]: time="2025-01-30T12:54:58.888562920Z" level=info msg="StartContainer for \"73ca23ebbdc3eff73dd432b6d127b6ddb0c8ffcb2fccf94a4d9c132df307f64b\" returns successfully" Jan 30 12:54:58.888773 containerd[1445]: time="2025-01-30T12:54:58.888755600Z" level=info msg="StartContainer for \"2ae6d86c937fe21a2ac9ffe208bc3f12bc89fe3293e0a3e59c3d879b3fbbe1e6\" returns successfully" Jan 30 12:54:58.888843 containerd[1445]: time="2025-01-30T12:54:58.888785040Z" level=info msg="StartContainer for \"d7421fd8a35059edac1e1b2b951b605e99f66fbf8ec124a56c2533d1259ffa2f\" returns successfully" Jan 30 12:54:59.357812 kubelet[2254]: E0130 12:54:59.357771 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:59.360697 kubelet[2254]: E0130 12:54:59.360663 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:54:59.360815 kubelet[2254]: E0130 12:54:59.360777 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:00.364364 kubelet[2254]: E0130 12:55:00.364318 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:00.430921 kubelet[2254]: 
I0130 12:55:00.430238 2254 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:55:00.820714 kubelet[2254]: E0130 12:55:00.820599 2254 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 12:55:00.914754 kubelet[2254]: I0130 12:55:00.914718 2254 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:55:00.939029 kubelet[2254]: E0130 12:55:00.938919 2254 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f799d35edfc78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:54:57.30991628 +0000 UTC m=+0.633131801,LastTimestamp:2025-01-30 12:54:57.30991628 +0000 UTC m=+0.633131801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:55:00.992843 kubelet[2254]: E0130 12:55:00.992564 2254 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f799d385c50e0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 12:54:57.35070128 +0000 UTC m=+0.673916761,LastTimestamp:2025-01-30 12:54:57.35070128 +0000 UTC m=+0.673916761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 12:55:01.307960 kubelet[2254]: I0130 12:55:01.307918 2254 apiserver.go:52] "Watching apiserver" Jan 30 12:55:01.324819 kubelet[2254]: I0130 12:55:01.324765 2254 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:55:01.369793 kubelet[2254]: E0130 12:55:01.369734 2254 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 12:55:01.370291 kubelet[2254]: E0130 12:55:01.370248 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:02.532227 kubelet[2254]: E0130 12:55:02.532147 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:02.772527 kubelet[2254]: E0130 12:55:02.772452 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:03.145840 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-7.scope)... Jan 30 12:55:03.145859 systemd[1]: Reloading... Jan 30 12:55:03.221047 zram_generator::config[2575]: No configuration found. Jan 30 12:55:03.305596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 30 12:55:03.368252 kubelet[2254]: E0130 12:55:03.368132 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:03.368252 kubelet[2254]: E0130 12:55:03.368171 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:03.372630 systemd[1]: Reloading finished in 226 ms. Jan 30 12:55:03.405734 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:03.416044 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:55:03.416299 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:03.416370 systemd[1]: kubelet.service: Consumed 1.092s CPU time, 114.4M memory peak, 0B memory swap peak. Jan 30 12:55:03.431283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:55:03.544852 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:55:03.550747 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:55:03.606773 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:55:03.606773 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:55:03.606773 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:55:03.607183 kubelet[2613]: I0130 12:55:03.606812 2613 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:55:03.611946 kubelet[2613]: I0130 12:55:03.611325 2613 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:55:03.611946 kubelet[2613]: I0130 12:55:03.611354 2613 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:55:03.611946 kubelet[2613]: I0130 12:55:03.611530 2613 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:55:03.613334 kubelet[2613]: I0130 12:55:03.613313 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 12:55:03.615164 kubelet[2613]: I0130 12:55:03.615133 2613 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:55:03.622599 kubelet[2613]: I0130 12:55:03.622562 2613 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 12:55:03.622988 kubelet[2613]: I0130 12:55:03.622950 2613 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:55:03.623352 kubelet[2613]: I0130 12:55:03.623055 2613 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:55:03.623850 kubelet[2613]: I0130 12:55:03.623830 2613 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 
12:55:03.623946 kubelet[2613]: I0130 12:55:03.623936 2613 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:55:03.624103 kubelet[2613]: I0130 12:55:03.624090 2613 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:55:03.624293 kubelet[2613]: I0130 12:55:03.624279 2613 kubelet.go:400] "Attempting to sync node with API server" Jan 30 12:55:03.624370 kubelet[2613]: I0130 12:55:03.624356 2613 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:55:03.624442 kubelet[2613]: I0130 12:55:03.624433 2613 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:55:03.624552 kubelet[2613]: I0130 12:55:03.624511 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:55:03.626270 kubelet[2613]: I0130 12:55:03.626236 2613 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 12:55:03.626489 kubelet[2613]: I0130 12:55:03.626474 2613 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:55:03.627099 kubelet[2613]: I0130 12:55:03.627079 2613 server.go:1264] "Started kubelet" Jan 30 12:55:03.628806 kubelet[2613]: I0130 12:55:03.628778 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:55:03.630627 kubelet[2613]: I0130 12:55:03.630000 2613 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:55:03.632289 kubelet[2613]: I0130 12:55:03.632258 2613 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:55:03.632630 kubelet[2613]: I0130 12:55:03.632399 2613 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:55:03.632630 kubelet[2613]: I0130 12:55:03.632609 2613 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:55:03.633112 kubelet[2613]: I0130 12:55:03.633091 2613 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:55:03.633112 
kubelet[2613]: I0130 12:55:03.633984 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:55:03.633112 kubelet[2613]: I0130 12:55:03.634190 2613 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:55:03.648401 kubelet[2613]: E0130 12:55:03.648206 2613 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:55:03.651096 kubelet[2613]: I0130 12:55:03.651054 2613 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:55:03.651212 kubelet[2613]: I0130 12:55:03.651177 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:55:03.652768 kubelet[2613]: I0130 12:55:03.652723 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:55:03.656389 kubelet[2613]: I0130 12:55:03.656285 2613 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 12:55:03.656463 kubelet[2613]: I0130 12:55:03.656334 2613 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:55:03.656463 kubelet[2613]: I0130 12:55:03.656433 2613 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:55:03.656507 kubelet[2613]: E0130 12:55:03.656481 2613 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 12:55:03.658472 kubelet[2613]: I0130 12:55:03.657500 2613 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:55:03.698829 kubelet[2613]: I0130 12:55:03.698800 2613 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:55:03.698829 kubelet[2613]: I0130 12:55:03.698821 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:55:03.698829 kubelet[2613]: I0130 12:55:03.698843 2613 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:55:03.699076 kubelet[2613]: I0130 12:55:03.699057 2613 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 12:55:03.699104 kubelet[2613]: I0130 12:55:03.699074 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 12:55:03.699104 kubelet[2613]: I0130 12:55:03.699094 2613 policy_none.go:49] "None policy: Start" Jan 30 12:55:03.699772 kubelet[2613]: I0130 12:55:03.699754 2613 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:55:03.699817 kubelet[2613]: I0130 12:55:03.699781 2613 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:55:03.700036 kubelet[2613]: I0130 12:55:03.700004 2613 state_mem.go:75] "Updated machine memory state" Jan 30 12:55:03.704615 kubelet[2613]: I0130 12:55:03.704579 2613 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:55:03.705208 kubelet[2613]: I0130 12:55:03.704755 2613 container_log_manager.go:186] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:55:03.705208 kubelet[2613]: I0130 12:55:03.705138 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:55:03.738291 kubelet[2613]: I0130 12:55:03.738260 2613 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 12:55:03.746531 kubelet[2613]: I0130 12:55:03.746488 2613 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 12:55:03.746648 kubelet[2613]: I0130 12:55:03.746587 2613 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 12:55:03.756633 kubelet[2613]: I0130 12:55:03.756596 2613 topology_manager.go:215] "Topology Admit Handler" podUID="6fc5c1b5a8b5a8a069728adaae460700" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 12:55:03.756766 kubelet[2613]: I0130 12:55:03.756708 2613 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 12:55:03.756766 kubelet[2613]: I0130 12:55:03.756749 2613 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 12:55:03.762745 kubelet[2613]: E0130 12:55:03.762702 2613 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 12:55:03.762865 kubelet[2613]: E0130 12:55:03.762784 2613 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:03.933981 kubelet[2613]: I0130 12:55:03.933866 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:55:03.933981 kubelet[2613]: I0130 12:55:03.933920 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:03.933981 kubelet[2613]: I0130 12:55:03.933946 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:03.933981 kubelet[2613]: I0130 12:55:03.933965 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 12:55:03.933981 kubelet[2613]: I0130 12:55:03.933990 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:55:03.934313 kubelet[2613]: I0130 12:55:03.934028 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6fc5c1b5a8b5a8a069728adaae460700-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"6fc5c1b5a8b5a8a069728adaae460700\") " pod="kube-system/kube-apiserver-localhost" Jan 30 12:55:03.934313 kubelet[2613]: I0130 12:55:03.934048 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:03.934313 kubelet[2613]: I0130 12:55:03.934070 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:03.934313 kubelet[2613]: I0130 12:55:03.934091 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 12:55:04.063970 kubelet[2613]: E0130 12:55:04.063654 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.063970 kubelet[2613]: E0130 12:55:04.063769 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.063970 kubelet[2613]: E0130 12:55:04.063811 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.625288 kubelet[2613]: I0130 12:55:04.625202 2613 apiserver.go:52] "Watching apiserver" Jan 30 12:55:04.632617 kubelet[2613]: I0130 12:55:04.632558 2613 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:55:04.681281 kubelet[2613]: E0130 12:55:04.681072 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.686066 kubelet[2613]: E0130 12:55:04.681650 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.695820 kubelet[2613]: E0130 12:55:04.695782 2613 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 12:55:04.696998 kubelet[2613]: E0130 12:55:04.696905 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:04.733922 kubelet[2613]: I0130 12:55:04.731862 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.731844074 podStartE2EDuration="1.731844074s" podCreationTimestamp="2025-01-30 12:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:04.73154775 +0000 UTC m=+1.176490031" watchObservedRunningTime="2025-01-30 12:55:04.731844074 +0000 UTC m=+1.176786315" Jan 30 12:55:04.733922 kubelet[2613]: I0130 12:55:04.732028 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=2.732023156 podStartE2EDuration="2.732023156s" podCreationTimestamp="2025-01-30 12:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:04.706363366 +0000 UTC m=+1.151305607" watchObservedRunningTime="2025-01-30 12:55:04.732023156 +0000 UTC m=+1.176965397" Jan 30 12:55:04.749616 kubelet[2613]: I0130 12:55:04.749545 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.749524835 podStartE2EDuration="2.749524835s" podCreationTimestamp="2025-01-30 12:55:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:04.74917303 +0000 UTC m=+1.194115311" watchObservedRunningTime="2025-01-30 12:55:04.749524835 +0000 UTC m=+1.194467076" Jan 30 12:55:05.683378 kubelet[2613]: E0130 12:55:05.683325 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:07.352221 kubelet[2613]: E0130 12:55:07.352178 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:08.738874 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 30 12:55:08.740311 sshd[1625]: Connection closed by 10.0.0.1 port 55000 Jan 30 12:55:08.741056 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:08.744033 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:55:08.744237 systemd[1]: session-7.scope: Consumed 7.277s CPU time, 192.7M memory peak, 0B memory swap peak. Jan 30 12:55:08.744791 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:55000.service: Deactivated successfully. 
Jan 30 12:55:08.748994 systemd-logind[1427]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:55:08.750342 systemd-logind[1427]: Removed session 7. Jan 30 12:55:08.813312 kubelet[2613]: E0130 12:55:08.813269 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:08.841338 kubelet[2613]: E0130 12:55:08.841228 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:09.691440 kubelet[2613]: E0130 12:55:09.688839 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:09.691440 kubelet[2613]: E0130 12:55:09.688867 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:16.581464 update_engine[1433]: I20250130 12:55:16.581374 1433 update_attempter.cc:509] Updating boot flags... Jan 30 12:55:16.614244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2711) Jan 30 12:55:17.361056 kubelet[2613]: E0130 12:55:17.360287 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:19.516920 kubelet[2613]: I0130 12:55:19.516847 2613 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 12:55:19.525290 containerd[1445]: time="2025-01-30T12:55:19.525211999Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 12:55:19.525669 kubelet[2613]: I0130 12:55:19.525557 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 12:55:19.582054 kubelet[2613]: I0130 12:55:19.581987 2613 topology_manager.go:215] "Topology Admit Handler" podUID="d55e4427-8044-4810-8ca0-520007264e34" podNamespace="kube-system" podName="kube-proxy-b2jth" Jan 30 12:55:19.593243 systemd[1]: Created slice kubepods-besteffort-podd55e4427_8044_4810_8ca0_520007264e34.slice - libcontainer container kubepods-besteffort-podd55e4427_8044_4810_8ca0_520007264e34.slice. Jan 30 12:55:19.657568 kubelet[2613]: I0130 12:55:19.655506 2613 topology_manager.go:215] "Topology Admit Handler" podUID="0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-pchfw" Jan 30 12:55:19.657568 kubelet[2613]: I0130 12:55:19.656183 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d55e4427-8044-4810-8ca0-520007264e34-lib-modules\") pod \"kube-proxy-b2jth\" (UID: \"d55e4427-8044-4810-8ca0-520007264e34\") " pod="kube-system/kube-proxy-b2jth" Jan 30 12:55:19.657568 kubelet[2613]: I0130 12:55:19.656218 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4m5d\" (UniqueName: \"kubernetes.io/projected/d55e4427-8044-4810-8ca0-520007264e34-kube-api-access-q4m5d\") pod \"kube-proxy-b2jth\" (UID: \"d55e4427-8044-4810-8ca0-520007264e34\") " pod="kube-system/kube-proxy-b2jth" Jan 30 12:55:19.657568 kubelet[2613]: I0130 12:55:19.656257 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d55e4427-8044-4810-8ca0-520007264e34-kube-proxy\") pod \"kube-proxy-b2jth\" (UID: \"d55e4427-8044-4810-8ca0-520007264e34\") " pod="kube-system/kube-proxy-b2jth" Jan 30 12:55:19.657568 kubelet[2613]: 
I0130 12:55:19.656276 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d55e4427-8044-4810-8ca0-520007264e34-xtables-lock\") pod \"kube-proxy-b2jth\" (UID: \"d55e4427-8044-4810-8ca0-520007264e34\") " pod="kube-system/kube-proxy-b2jth" Jan 30 12:55:19.667586 systemd[1]: Created slice kubepods-besteffort-pod0c3f3ab8_e60b_4ea3_a2c6_c659b24e2a59.slice - libcontainer container kubepods-besteffort-pod0c3f3ab8_e60b_4ea3_a2c6_c659b24e2a59.slice. Jan 30 12:55:19.756726 kubelet[2613]: I0130 12:55:19.756591 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59-var-lib-calico\") pod \"tigera-operator-7bc55997bb-pchfw\" (UID: \"0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59\") " pod="tigera-operator/tigera-operator-7bc55997bb-pchfw" Jan 30 12:55:19.756726 kubelet[2613]: I0130 12:55:19.756645 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwcb6\" (UniqueName: \"kubernetes.io/projected/0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59-kube-api-access-lwcb6\") pod \"tigera-operator-7bc55997bb-pchfw\" (UID: \"0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59\") " pod="tigera-operator/tigera-operator-7bc55997bb-pchfw" Jan 30 12:55:19.915995 kubelet[2613]: E0130 12:55:19.915731 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:19.918794 containerd[1445]: time="2025-01-30T12:55:19.918699397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2jth,Uid:d55e4427-8044-4810-8ca0-520007264e34,Namespace:kube-system,Attempt:0,}" Jan 30 12:55:19.957096 containerd[1445]: time="2025-01-30T12:55:19.956982595Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:19.957096 containerd[1445]: time="2025-01-30T12:55:19.957061835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:19.958215 containerd[1445]: time="2025-01-30T12:55:19.957077275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:19.958215 containerd[1445]: time="2025-01-30T12:55:19.957176756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:19.972487 containerd[1445]: time="2025-01-30T12:55:19.972369115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-pchfw,Uid:0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59,Namespace:tigera-operator,Attempt:0,}" Jan 30 12:55:19.979114 systemd[1]: Started cri-containerd-249b40143de30c98551770acb6881e4a0aa6ab401da22ed1ec9b9b3a8aaf2c07.scope - libcontainer container 249b40143de30c98551770acb6881e4a0aa6ab401da22ed1ec9b9b3a8aaf2c07. Jan 30 12:55:20.001948 containerd[1445]: time="2025-01-30T12:55:20.001820027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:20.001948 containerd[1445]: time="2025-01-30T12:55:20.001900187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:20.001948 containerd[1445]: time="2025-01-30T12:55:20.001915947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:20.002126 containerd[1445]: time="2025-01-30T12:55:20.002003188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:20.004977 containerd[1445]: time="2025-01-30T12:55:20.004938362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2jth,Uid:d55e4427-8044-4810-8ca0-520007264e34,Namespace:kube-system,Attempt:0,} returns sandbox id \"249b40143de30c98551770acb6881e4a0aa6ab401da22ed1ec9b9b3a8aaf2c07\"" Jan 30 12:55:20.007640 kubelet[2613]: E0130 12:55:20.007615 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:20.014543 containerd[1445]: time="2025-01-30T12:55:20.014465728Z" level=info msg="CreateContainer within sandbox \"249b40143de30c98551770acb6881e4a0aa6ab401da22ed1ec9b9b3a8aaf2c07\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:55:20.035100 systemd[1]: Started cri-containerd-1030076eaa893ab3065db4d1c5605c6225774aac01f4949f3043c265b70432b4.scope - libcontainer container 1030076eaa893ab3065db4d1c5605c6225774aac01f4949f3043c265b70432b4. 
Jan 30 12:55:20.059457 containerd[1445]: time="2025-01-30T12:55:20.059396786Z" level=info msg="CreateContainer within sandbox \"249b40143de30c98551770acb6881e4a0aa6ab401da22ed1ec9b9b3a8aaf2c07\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9931f652a3380d69c0b929868a680808ad629a2b80c98f643206a857cafbfdfe\"" Jan 30 12:55:20.063654 containerd[1445]: time="2025-01-30T12:55:20.063440926Z" level=info msg="StartContainer for \"9931f652a3380d69c0b929868a680808ad629a2b80c98f643206a857cafbfdfe\"" Jan 30 12:55:20.070128 containerd[1445]: time="2025-01-30T12:55:20.070081678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-pchfw,Uid:0c3f3ab8-e60b-4ea3-a2c6-c659b24e2a59,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1030076eaa893ab3065db4d1c5605c6225774aac01f4949f3043c265b70432b4\"" Jan 30 12:55:20.074104 containerd[1445]: time="2025-01-30T12:55:20.074062378Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 12:55:20.102088 systemd[1]: Started cri-containerd-9931f652a3380d69c0b929868a680808ad629a2b80c98f643206a857cafbfdfe.scope - libcontainer container 9931f652a3380d69c0b929868a680808ad629a2b80c98f643206a857cafbfdfe. Jan 30 12:55:20.147614 containerd[1445]: time="2025-01-30T12:55:20.147558854Z" level=info msg="StartContainer for \"9931f652a3380d69c0b929868a680808ad629a2b80c98f643206a857cafbfdfe\" returns successfully" Jan 30 12:55:20.737619 kubelet[2613]: E0130 12:55:20.737471 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:21.292960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348171602.mount: Deactivated successfully. 
Jan 30 12:55:22.521915 containerd[1445]: time="2025-01-30T12:55:22.521855252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:22.522993 containerd[1445]: time="2025-01-30T12:55:22.522744695Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 30 12:55:22.523825 containerd[1445]: time="2025-01-30T12:55:22.523631579Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:22.526256 containerd[1445]: time="2025-01-30T12:55:22.526216270Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:22.526952 containerd[1445]: time="2025-01-30T12:55:22.526918273Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.452811415s" Jan 30 12:55:22.527024 containerd[1445]: time="2025-01-30T12:55:22.526980393Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 30 12:55:22.533973 containerd[1445]: time="2025-01-30T12:55:22.533933103Z" level=info msg="CreateContainer within sandbox \"1030076eaa893ab3065db4d1c5605c6225774aac01f4949f3043c265b70432b4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 12:55:22.545067 containerd[1445]: time="2025-01-30T12:55:22.545016350Z" level=info msg="CreateContainer within sandbox 
\"1030076eaa893ab3065db4d1c5605c6225774aac01f4949f3043c265b70432b4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"53e09c4ba1823a1fe18b121209e726152b9433908e944d49548cfbe3f8f00e48\"" Jan 30 12:55:22.546002 containerd[1445]: time="2025-01-30T12:55:22.545972754Z" level=info msg="StartContainer for \"53e09c4ba1823a1fe18b121209e726152b9433908e944d49548cfbe3f8f00e48\"" Jan 30 12:55:22.584088 systemd[1]: Started cri-containerd-53e09c4ba1823a1fe18b121209e726152b9433908e944d49548cfbe3f8f00e48.scope - libcontainer container 53e09c4ba1823a1fe18b121209e726152b9433908e944d49548cfbe3f8f00e48. Jan 30 12:55:22.607219 containerd[1445]: time="2025-01-30T12:55:22.607103015Z" level=info msg="StartContainer for \"53e09c4ba1823a1fe18b121209e726152b9433908e944d49548cfbe3f8f00e48\" returns successfully" Jan 30 12:55:22.777697 kubelet[2613]: I0130 12:55:22.777477 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b2jth" podStartSLOduration=3.777459582 podStartE2EDuration="3.777459582s" podCreationTimestamp="2025-01-30 12:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:20.759434945 +0000 UTC m=+17.204377186" watchObservedRunningTime="2025-01-30 12:55:22.777459582 +0000 UTC m=+19.222401863" Jan 30 12:55:22.778908 kubelet[2613]: I0130 12:55:22.778688 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-pchfw" podStartSLOduration=1.322361396 podStartE2EDuration="3.778667427s" podCreationTimestamp="2025-01-30 12:55:19 +0000 UTC" firstStartedPulling="2025-01-30 12:55:20.072826972 +0000 UTC m=+16.517769213" lastFinishedPulling="2025-01-30 12:55:22.529133003 +0000 UTC m=+18.974075244" observedRunningTime="2025-01-30 12:55:22.778517747 +0000 UTC m=+19.223459988" watchObservedRunningTime="2025-01-30 12:55:22.778667427 +0000 UTC 
m=+19.223609748" Jan 30 12:55:26.906682 kubelet[2613]: I0130 12:55:26.906626 2613 topology_manager.go:215] "Topology Admit Handler" podUID="ac8106aa-5108-4d6c-9fa9-3e60927cf3d6" podNamespace="calico-system" podName="calico-typha-6994b47d85-vb2bp" Jan 30 12:55:26.920879 systemd[1]: Created slice kubepods-besteffort-podac8106aa_5108_4d6c_9fa9_3e60927cf3d6.slice - libcontainer container kubepods-besteffort-podac8106aa_5108_4d6c_9fa9_3e60927cf3d6.slice. Jan 30 12:55:26.964981 kubelet[2613]: I0130 12:55:26.964936 2613 topology_manager.go:215] "Topology Admit Handler" podUID="8c440849-7d87-4de3-9683-8a38f28b3b1d" podNamespace="calico-system" podName="calico-node-pxzgm" Jan 30 12:55:26.974939 systemd[1]: Created slice kubepods-besteffort-pod8c440849_7d87_4de3_9683_8a38f28b3b1d.slice - libcontainer container kubepods-besteffort-pod8c440849_7d87_4de3_9683_8a38f28b3b1d.slice. Jan 30 12:55:27.011789 kubelet[2613]: I0130 12:55:27.011742 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-flexvol-driver-host\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.011789 kubelet[2613]: I0130 12:55:27.011785 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5jt4\" (UniqueName: \"kubernetes.io/projected/8c440849-7d87-4de3-9683-8a38f28b3b1d-kube-api-access-j5jt4\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012013 kubelet[2613]: I0130 12:55:27.011813 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-policysync\") pod \"calico-node-pxzgm\" (UID: 
\"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012013 kubelet[2613]: I0130 12:55:27.011830 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-var-lib-calico\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012013 kubelet[2613]: I0130 12:55:27.011845 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-cni-bin-dir\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012013 kubelet[2613]: I0130 12:55:27.011864 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ac8106aa-5108-4d6c-9fa9-3e60927cf3d6-typha-certs\") pod \"calico-typha-6994b47d85-vb2bp\" (UID: \"ac8106aa-5108-4d6c-9fa9-3e60927cf3d6\") " pod="calico-system/calico-typha-6994b47d85-vb2bp" Jan 30 12:55:27.012013 kubelet[2613]: I0130 12:55:27.011919 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-cni-net-dir\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012125 kubelet[2613]: I0130 12:55:27.011938 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-xtables-lock\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " 
pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012125 kubelet[2613]: I0130 12:55:27.011959 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c440849-7d87-4de3-9683-8a38f28b3b1d-tigera-ca-bundle\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012125 kubelet[2613]: I0130 12:55:27.011980 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c440849-7d87-4de3-9683-8a38f28b3b1d-node-certs\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012125 kubelet[2613]: I0130 12:55:27.011994 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-var-run-calico\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012125 kubelet[2613]: I0130 12:55:27.012010 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac8106aa-5108-4d6c-9fa9-3e60927cf3d6-tigera-ca-bundle\") pod \"calico-typha-6994b47d85-vb2bp\" (UID: \"ac8106aa-5108-4d6c-9fa9-3e60927cf3d6\") " pod="calico-system/calico-typha-6994b47d85-vb2bp" Jan 30 12:55:27.012229 kubelet[2613]: I0130 12:55:27.012026 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-lib-modules\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 
12:55:27.012229 kubelet[2613]: I0130 12:55:27.012040 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c440849-7d87-4de3-9683-8a38f28b3b1d-cni-log-dir\") pod \"calico-node-pxzgm\" (UID: \"8c440849-7d87-4de3-9683-8a38f28b3b1d\") " pod="calico-system/calico-node-pxzgm" Jan 30 12:55:27.012229 kubelet[2613]: I0130 12:55:27.012058 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n8jw\" (UniqueName: \"kubernetes.io/projected/ac8106aa-5108-4d6c-9fa9-3e60927cf3d6-kube-api-access-7n8jw\") pod \"calico-typha-6994b47d85-vb2bp\" (UID: \"ac8106aa-5108-4d6c-9fa9-3e60927cf3d6\") " pod="calico-system/calico-typha-6994b47d85-vb2bp" Jan 30 12:55:27.078801 kubelet[2613]: I0130 12:55:27.078725 2613 topology_manager.go:215] "Topology Admit Handler" podUID="c53dd490-f49e-4931-b31d-7e8897227295" podNamespace="calico-system" podName="csi-node-driver-rx85w" Jan 30 12:55:27.079267 kubelet[2613]: E0130 12:55:27.079034 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:27.113575 kubelet[2613]: I0130 12:55:27.113284 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c53dd490-f49e-4931-b31d-7e8897227295-kubelet-dir\") pod \"csi-node-driver-rx85w\" (UID: \"c53dd490-f49e-4931-b31d-7e8897227295\") " pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:27.113575 kubelet[2613]: I0130 12:55:27.113498 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/c53dd490-f49e-4931-b31d-7e8897227295-registration-dir\") pod \"csi-node-driver-rx85w\" (UID: \"c53dd490-f49e-4931-b31d-7e8897227295\") " pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:27.113733 kubelet[2613]: I0130 12:55:27.113593 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c53dd490-f49e-4931-b31d-7e8897227295-socket-dir\") pod \"csi-node-driver-rx85w\" (UID: \"c53dd490-f49e-4931-b31d-7e8897227295\") " pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:27.113733 kubelet[2613]: I0130 12:55:27.113613 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7xnp\" (UniqueName: \"kubernetes.io/projected/c53dd490-f49e-4931-b31d-7e8897227295-kube-api-access-q7xnp\") pod \"csi-node-driver-rx85w\" (UID: \"c53dd490-f49e-4931-b31d-7e8897227295\") " pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:27.113733 kubelet[2613]: I0130 12:55:27.113651 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c53dd490-f49e-4931-b31d-7e8897227295-varrun\") pod \"csi-node-driver-rx85w\" (UID: \"c53dd490-f49e-4931-b31d-7e8897227295\") " pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:27.119040 kubelet[2613]: E0130 12:55:27.118989 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.119040 kubelet[2613]: W0130 12:55:27.119026 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.119040 kubelet[2613]: E0130 12:55:27.119062 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.120297 kubelet[2613]: E0130 12:55:27.120222 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.120297 kubelet[2613]: W0130 12:55:27.120241 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.120552 kubelet[2613]: E0130 12:55:27.120337 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.121540 kubelet[2613]: E0130 12:55:27.121513 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.121540 kubelet[2613]: W0130 12:55:27.121534 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.121766 kubelet[2613]: E0130 12:55:27.121632 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.122697 kubelet[2613]: E0130 12:55:27.122575 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.122697 kubelet[2613]: W0130 12:55:27.122592 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.122697 kubelet[2613]: E0130 12:55:27.122632 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.123553 kubelet[2613]: E0130 12:55:27.122973 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.123553 kubelet[2613]: W0130 12:55:27.122989 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.123553 kubelet[2613]: E0130 12:55:27.123026 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.123553 kubelet[2613]: E0130 12:55:27.123420 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.123553 kubelet[2613]: W0130 12:55:27.123433 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.123753 kubelet[2613]: E0130 12:55:27.123592 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.124165 kubelet[2613]: E0130 12:55:27.124049 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.124165 kubelet[2613]: W0130 12:55:27.124160 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.124165 kubelet[2613]: E0130 12:55:27.124198 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.124760 kubelet[2613]: E0130 12:55:27.124457 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.124760 kubelet[2613]: W0130 12:55:27.124473 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.124760 kubelet[2613]: E0130 12:55:27.124484 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.124760 kubelet[2613]: E0130 12:55:27.124679 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.124760 kubelet[2613]: W0130 12:55:27.124689 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.124760 kubelet[2613]: E0130 12:55:27.124698 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.125959 kubelet[2613]: E0130 12:55:27.125936 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.125959 kubelet[2613]: W0130 12:55:27.125950 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.125959 kubelet[2613]: E0130 12:55:27.125962 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.135981 kubelet[2613]: E0130 12:55:27.135546 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.135981 kubelet[2613]: W0130 12:55:27.135572 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.135981 kubelet[2613]: E0130 12:55:27.135596 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.136580 kubelet[2613]: E0130 12:55:27.136484 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.137819 kubelet[2613]: W0130 12:55:27.137792 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.137963 kubelet[2613]: E0130 12:55:27.137949 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.138710 kubelet[2613]: E0130 12:55:27.138693 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.138792 kubelet[2613]: W0130 12:55:27.138780 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.138843 kubelet[2613]: E0130 12:55:27.138832 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.143618 kubelet[2613]: E0130 12:55:27.143576 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.143618 kubelet[2613]: W0130 12:55:27.143606 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.143744 kubelet[2613]: E0130 12:55:27.143625 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.153441 kubelet[2613]: E0130 12:55:27.153413 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.153441 kubelet[2613]: W0130 12:55:27.153433 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.153633 kubelet[2613]: E0130 12:55:27.153454 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.214953 kubelet[2613]: E0130 12:55:27.214924 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.214953 kubelet[2613]: W0130 12:55:27.214946 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.215151 kubelet[2613]: E0130 12:55:27.214965 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.215279 kubelet[2613]: E0130 12:55:27.215267 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.215311 kubelet[2613]: W0130 12:55:27.215293 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.215311 kubelet[2613]: E0130 12:55:27.215308 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.215556 kubelet[2613]: E0130 12:55:27.215540 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.215556 kubelet[2613]: W0130 12:55:27.215552 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.215628 kubelet[2613]: E0130 12:55:27.215566 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.215977 kubelet[2613]: E0130 12:55:27.215961 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.215977 kubelet[2613]: W0130 12:55:27.215977 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.216045 kubelet[2613]: E0130 12:55:27.215995 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.216248 kubelet[2613]: E0130 12:55:27.216235 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.216248 kubelet[2613]: W0130 12:55:27.216247 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.216307 kubelet[2613]: E0130 12:55:27.216261 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.216478 kubelet[2613]: E0130 12:55:27.216465 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.216478 kubelet[2613]: W0130 12:55:27.216476 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.216582 kubelet[2613]: E0130 12:55:27.216553 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.216657 kubelet[2613]: E0130 12:55:27.216646 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.216657 kubelet[2613]: W0130 12:55:27.216656 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.216733 kubelet[2613]: E0130 12:55:27.216720 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.216836 kubelet[2613]: E0130 12:55:27.216826 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.216865 kubelet[2613]: W0130 12:55:27.216839 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.216940 kubelet[2613]: E0130 12:55:27.216919 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.217000 kubelet[2613]: E0130 12:55:27.216989 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.217000 kubelet[2613]: W0130 12:55:27.216999 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.217091 kubelet[2613]: E0130 12:55:27.217066 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.217151 kubelet[2613]: E0130 12:55:27.217127 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.217178 kubelet[2613]: W0130 12:55:27.217150 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.217178 kubelet[2613]: E0130 12:55:27.217164 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.217369 kubelet[2613]: E0130 12:55:27.217354 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.217416 kubelet[2613]: W0130 12:55:27.217367 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.217416 kubelet[2613]: E0130 12:55:27.217400 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.217587 kubelet[2613]: E0130 12:55:27.217576 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.217587 kubelet[2613]: W0130 12:55:27.217587 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.217646 kubelet[2613]: E0130 12:55:27.217599 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.217825 kubelet[2613]: E0130 12:55:27.217812 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.217860 kubelet[2613]: W0130 12:55:27.217824 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.217956 kubelet[2613]: E0130 12:55:27.217919 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.218014 kubelet[2613]: E0130 12:55:27.218001 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.218014 kubelet[2613]: W0130 12:55:27.218012 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.218100 kubelet[2613]: E0130 12:55:27.218039 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 12:55:27.218182 kubelet[2613]: E0130 12:55:27.218172 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.218182 kubelet[2613]: W0130 12:55:27.218181 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.218272 kubelet[2613]: E0130 12:55:27.218259 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 12:55:27.218339 kubelet[2613]: E0130 12:55:27.218328 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 12:55:27.218339 kubelet[2613]: W0130 12:55:27.218337 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 12:55:27.218412 kubelet[2613]: E0130 12:55:27.218397 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 30 12:55:27.218507 kubelet[2613]: E0130 12:55:27.218497 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 12:55:27.218540 kubelet[2613]: W0130 12:55:27.218506 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 12:55:27.218569 kubelet[2613]: E0130 12:55:27.218536 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 12:55:27.227833 kubelet[2613]: E0130 12:55:27.227793 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:27.231541 containerd[1445]: time="2025-01-30T12:55:27.231502091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6994b47d85-vb2bp,Uid:ac8106aa-5108-4d6c-9fa9-3e60927cf3d6,Namespace:calico-system,Attempt:0,}"
Jan 30 12:55:27.265195 containerd[1445]: time="2025-01-30T12:55:27.265098355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:55:27.265195 containerd[1445]: time="2025-01-30T12:55:27.265151235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:55:27.265195 containerd[1445]: time="2025-01-30T12:55:27.265162515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:55:27.265400 containerd[1445]: time="2025-01-30T12:55:27.265243835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:55:27.278145 kubelet[2613]: E0130 12:55:27.278106 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:27.279009 containerd[1445]: time="2025-01-30T12:55:27.278979198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pxzgm,Uid:8c440849-7d87-4de3-9683-8a38f28b3b1d,Namespace:calico-system,Attempt:0,}"
Jan 30 12:55:27.285089 systemd[1]: Started cri-containerd-2f0ddc72503df90507c27fee42b5224aa7229322adcbd79d83e5ec8af52dd5f5.scope - libcontainer container 2f0ddc72503df90507c27fee42b5224aa7229322adcbd79d83e5ec8af52dd5f5.
Jan 30 12:55:27.305335 containerd[1445]: time="2025-01-30T12:55:27.304715837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:55:27.305335 containerd[1445]: time="2025-01-30T12:55:27.305296519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..."
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:55:27.305335 containerd[1445]: time="2025-01-30T12:55:27.305354799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:55:27.305616 containerd[1445]: time="2025-01-30T12:55:27.305474360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:55:27.325058 systemd[1]: Started cri-containerd-62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9.scope - libcontainer container 62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9.
Jan 30 12:55:27.327976 containerd[1445]: time="2025-01-30T12:55:27.327813629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6994b47d85-vb2bp,Uid:ac8106aa-5108-4d6c-9fa9-3e60927cf3d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f0ddc72503df90507c27fee42b5224aa7229322adcbd79d83e5ec8af52dd5f5\""
Jan 30 12:55:27.329541 kubelet[2613]: E0130 12:55:27.329518 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:27.331858 containerd[1445]: time="2025-01-30T12:55:27.331769281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 12:55:27.358178 containerd[1445]: time="2025-01-30T12:55:27.358035802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pxzgm,Uid:8c440849-7d87-4de3-9683-8a38f28b3b1d,Namespace:calico-system,Attempt:0,} returns sandbox id \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\""
Jan 30 12:55:27.359659 kubelet[2613]: E0130 12:55:27.359573 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:28.657043 kubelet[2613]: E0130 12:55:28.656986 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295"
Jan 30 12:55:29.062757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895166308.mount: Deactivated successfully.
Jan 30 12:55:30.191053 containerd[1445]: time="2025-01-30T12:55:30.190992046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:55:30.192271 containerd[1445]: time="2025-01-30T12:55:30.192102649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Jan 30 12:55:30.193255 containerd[1445]: time="2025-01-30T12:55:30.193056971Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:55:30.196143 containerd[1445]: time="2025-01-30T12:55:30.196100659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:55:30.197083 containerd[1445]: time="2025-01-30T12:55:30.197049581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.86524222s"
Jan 30 12:55:30.197083 containerd[1445]: time="2025-01-30T12:55:30.197084462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 30 12:55:30.198048 containerd[1445]: time="2025-01-30T12:55:30.198021624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 12:55:30.212209 containerd[1445]: time="2025-01-30T12:55:30.212142380Z" level=info msg="CreateContainer within sandbox \"2f0ddc72503df90507c27fee42b5224aa7229322adcbd79d83e5ec8af52dd5f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 12:55:30.228491 containerd[1445]: time="2025-01-30T12:55:30.228293901Z" level=info msg="CreateContainer within sandbox \"2f0ddc72503df90507c27fee42b5224aa7229322adcbd79d83e5ec8af52dd5f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f77c77281fca7b76d1152d1c698b78971c74d5f378d84e3bec7bceacaa11f2de\""
Jan 30 12:55:30.229810 containerd[1445]: time="2025-01-30T12:55:30.229624064Z" level=info msg="StartContainer for \"f77c77281fca7b76d1152d1c698b78971c74d5f378d84e3bec7bceacaa11f2de\""
Jan 30 12:55:30.261088 systemd[1]: Started cri-containerd-f77c77281fca7b76d1152d1c698b78971c74d5f378d84e3bec7bceacaa11f2de.scope - libcontainer container f77c77281fca7b76d1152d1c698b78971c74d5f378d84e3bec7bceacaa11f2de.
Jan 30 12:55:30.306588 containerd[1445]: time="2025-01-30T12:55:30.306467420Z" level=info msg="StartContainer for \"f77c77281fca7b76d1152d1c698b78971c74d5f378d84e3bec7bceacaa11f2de\" returns successfully"
Jan 30 12:55:30.657413 kubelet[2613]: E0130 12:55:30.656904 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295"
Jan 30 12:55:30.775260 kubelet[2613]: E0130 12:55:30.775208 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:30.787686 kubelet[2613]: I0130 12:55:30.787527 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6994b47d85-vb2bp" podStartSLOduration=1.9211399409999999 podStartE2EDuration="4.787508685s" podCreationTimestamp="2025-01-30 12:55:26 +0000 UTC" firstStartedPulling="2025-01-30 12:55:27.33147892 +0000 UTC m=+23.776421161" lastFinishedPulling="2025-01-30 12:55:30.197847584 +0000 UTC m=+26.642789905" observedRunningTime="2025-01-30 12:55:30.787430725 +0000 UTC m=+27.232372926" watchObservedRunningTime="2025-01-30 12:55:30.787508685 +0000 UTC m=+27.232450886"
Jan 30 12:55:30.815546 kubelet[2613]: E0130 12:55:30.815507 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 12:55:30.815546 kubelet[2613]: W0130 12:55:30.815532 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 12:55:30.815546 kubelet[2613]: E0130 12:55:30.815554 2613 plugins.go:730] "Error dynamically
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 12:55:30.847115 kubelet[2613]: E0130 12:55:30.847054 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 12:55:31.551497 containerd[1445]: time="2025-01-30T12:55:31.551444983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:31.552698 containerd[1445]: time="2025-01-30T12:55:31.552652585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 12:55:31.555873 containerd[1445]: time="2025-01-30T12:55:31.555841753Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:31.558999 containerd[1445]: time="2025-01-30T12:55:31.558965960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:31.559763 containerd[1445]: time="2025-01-30T12:55:31.559634682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.361576098s" Jan 30 12:55:31.559763 containerd[1445]: time="2025-01-30T12:55:31.559674362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 12:55:31.561970 containerd[1445]: time="2025-01-30T12:55:31.561939288Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 12:55:31.591489 containerd[1445]: time="2025-01-30T12:55:31.591433238Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc\"" Jan 30 12:55:31.592241 containerd[1445]: time="2025-01-30T12:55:31.592026759Z" level=info msg="StartContainer for \"d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc\"" Jan 30 12:55:31.630211 systemd[1]: Started cri-containerd-d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc.scope - libcontainer container d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc. Jan 30 12:55:31.659176 containerd[1445]: time="2025-01-30T12:55:31.658709439Z" level=info msg="StartContainer for \"d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc\" returns successfully" Jan 30 12:55:31.697155 systemd[1]: cri-containerd-d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc.scope: Deactivated successfully. 
Jan 30 12:55:31.780025 kubelet[2613]: E0130 12:55:31.779842 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:31.838542 kubelet[2613]: I0130 12:55:31.838428 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:55:31.839790 kubelet[2613]: E0130 12:55:31.839436 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:31.849819 containerd[1445]: time="2025-01-30T12:55:31.844485202Z" level=info msg="shim disconnected" id=d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc namespace=k8s.io Jan 30 12:55:31.850028 containerd[1445]: time="2025-01-30T12:55:31.849825815Z" level=warning msg="cleaning up after shim disconnected" id=d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc namespace=k8s.io Jan 30 12:55:31.850028 containerd[1445]: time="2025-01-30T12:55:31.849855255Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:55:32.206966 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d8ed05f06352da9b8005eb3941e50f7db0fb4fd1632955ae8256e1319f32dc-rootfs.mount: Deactivated successfully. 
Jan 30 12:55:32.657559 kubelet[2613]: E0130 12:55:32.657432 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:32.782715 kubelet[2613]: E0130 12:55:32.782437 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:32.783566 containerd[1445]: time="2025-01-30T12:55:32.783331846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 12:55:33.613049 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:53280.service - OpenSSH per-connection server daemon (10.0.0.1:53280). Jan 30 12:55:33.662244 sshd[3300]: Accepted publickey for core from 10.0.0.1 port 53280 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:33.667304 sshd-session[3300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:33.679094 systemd-logind[1427]: New session 8 of user core. Jan 30 12:55:33.693092 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 12:55:33.863382 sshd[3303]: Connection closed by 10.0.0.1 port 53280 Jan 30 12:55:33.864014 sshd-session[3300]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:33.867236 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:53280.service: Deactivated successfully. Jan 30 12:55:33.870345 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 12:55:33.871951 systemd-logind[1427]: Session 8 logged out. Waiting for processes to exit. Jan 30 12:55:33.873287 systemd-logind[1427]: Removed session 8. 
Jan 30 12:55:34.657417 kubelet[2613]: E0130 12:55:34.657357 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:36.395299 kubelet[2613]: I0130 12:55:36.395258 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:55:36.395959 kubelet[2613]: E0130 12:55:36.395940 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:36.657281 kubelet[2613]: E0130 12:55:36.657171 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:36.815087 kubelet[2613]: E0130 12:55:36.815045 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:37.010912 containerd[1445]: time="2025-01-30T12:55:37.010678427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:37.011635 containerd[1445]: time="2025-01-30T12:55:37.011545709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 12:55:37.012383 containerd[1445]: time="2025-01-30T12:55:37.012216150Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:37.015457 containerd[1445]: time="2025-01-30T12:55:37.015095554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:37.016274 containerd[1445]: time="2025-01-30T12:55:37.016243516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.23287071s" Jan 30 12:55:37.016377 containerd[1445]: time="2025-01-30T12:55:37.016360676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 12:55:37.019303 containerd[1445]: time="2025-01-30T12:55:37.019181281Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 12:55:37.033741 containerd[1445]: time="2025-01-30T12:55:37.033681345Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca\"" Jan 30 12:55:37.034506 containerd[1445]: time="2025-01-30T12:55:37.034480306Z" level=info msg="StartContainer for \"409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca\"" Jan 30 12:55:37.067111 systemd[1]: Started cri-containerd-409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca.scope - libcontainer container 
409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca. Jan 30 12:55:37.098524 containerd[1445]: time="2025-01-30T12:55:37.098396209Z" level=info msg="StartContainer for \"409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca\" returns successfully" Jan 30 12:55:37.693211 systemd[1]: cri-containerd-409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca.scope: Deactivated successfully. Jan 30 12:55:37.713741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca-rootfs.mount: Deactivated successfully. Jan 30 12:55:37.750968 kubelet[2613]: I0130 12:55:37.750938 2613 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 12:55:37.794767 containerd[1445]: time="2025-01-30T12:55:37.794699338Z" level=info msg="shim disconnected" id=409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca namespace=k8s.io Jan 30 12:55:37.794767 containerd[1445]: time="2025-01-30T12:55:37.794761098Z" level=warning msg="cleaning up after shim disconnected" id=409d003fbc940c1734dfaca717d5ce8084ec12e97f4dcb8831c754619236feca namespace=k8s.io Jan 30 12:55:37.794767 containerd[1445]: time="2025-01-30T12:55:37.794768978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:55:37.819844 kubelet[2613]: I0130 12:55:37.819795 2613 topology_manager.go:215] "Topology Admit Handler" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112" podNamespace="calico-apiserver" podName="calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:37.825784 kubelet[2613]: I0130 12:55:37.825730 2613 topology_manager.go:215] "Topology Admit Handler" podUID="0033dce8-2617-4a93-af94-68a801729315" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:37.826216 kubelet[2613]: I0130 12:55:37.825951 2613 topology_manager.go:215] "Topology Admit Handler" podUID="97079026-7789-44af-adb9-0f82fffc0d08" podNamespace="calico-system" 
podName="calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:37.826216 kubelet[2613]: I0130 12:55:37.826053 2613 topology_manager.go:215] "Topology Admit Handler" podUID="f805eb31-5209-4788-beca-600c9f139d8b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2582n" Jan 30 12:55:37.827346 kubelet[2613]: I0130 12:55:37.826988 2613 topology_manager.go:215] "Topology Admit Handler" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de" podNamespace="calico-apiserver" podName="calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:37.834764 kubelet[2613]: E0130 12:55:37.834524 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:37.835693 containerd[1445]: time="2025-01-30T12:55:37.835642364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 12:55:37.835745 systemd[1]: Created slice kubepods-besteffort-podcf956daa_f7be_4272_9ce2_d1c78864d112.slice - libcontainer container kubepods-besteffort-podcf956daa_f7be_4272_9ce2_d1c78864d112.slice. Jan 30 12:55:37.843551 systemd[1]: Created slice kubepods-burstable-podf805eb31_5209_4788_beca_600c9f139d8b.slice - libcontainer container kubepods-burstable-podf805eb31_5209_4788_beca_600c9f139d8b.slice. Jan 30 12:55:37.855123 systemd[1]: Created slice kubepods-besteffort-poddad53f38_004e_44aa_900a_c314d9a9e9de.slice - libcontainer container kubepods-besteffort-poddad53f38_004e_44aa_900a_c314d9a9e9de.slice. Jan 30 12:55:37.859986 systemd[1]: Created slice kubepods-besteffort-pod97079026_7789_44af_adb9_0f82fffc0d08.slice - libcontainer container kubepods-besteffort-pod97079026_7789_44af_adb9_0f82fffc0d08.slice. Jan 30 12:55:37.868052 systemd[1]: Created slice kubepods-burstable-pod0033dce8_2617_4a93_af94_68a801729315.slice - libcontainer container kubepods-burstable-pod0033dce8_2617_4a93_af94_68a801729315.slice. 
Jan 30 12:55:37.899544 kubelet[2613]: I0130 12:55:37.899491 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97079026-7789-44af-adb9-0f82fffc0d08-tigera-ca-bundle\") pod \"calico-kube-controllers-59db857b5c-zqhl4\" (UID: \"97079026-7789-44af-adb9-0f82fffc0d08\") " pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:37.899544 kubelet[2613]: I0130 12:55:37.899547 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwnp\" (UniqueName: \"kubernetes.io/projected/97079026-7789-44af-adb9-0f82fffc0d08-kube-api-access-8cwnp\") pod \"calico-kube-controllers-59db857b5c-zqhl4\" (UID: \"97079026-7789-44af-adb9-0f82fffc0d08\") " pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:37.899713 kubelet[2613]: I0130 12:55:37.899574 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dad53f38-004e-44aa-900a-c314d9a9e9de-calico-apiserver-certs\") pod \"calico-apiserver-ccfbbf7fd-c9cpd\" (UID: \"dad53f38-004e-44aa-900a-c314d9a9e9de\") " pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:37.899713 kubelet[2613]: I0130 12:55:37.899593 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97qvd\" (UniqueName: \"kubernetes.io/projected/0033dce8-2617-4a93-af94-68a801729315-kube-api-access-97qvd\") pod \"coredns-7db6d8ff4d-nsdsm\" (UID: \"0033dce8-2617-4a93-af94-68a801729315\") " pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:37.899713 kubelet[2613]: I0130 12:55:37.899624 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snvr7\" (UniqueName: 
\"kubernetes.io/projected/f805eb31-5209-4788-beca-600c9f139d8b-kube-api-access-snvr7\") pod \"coredns-7db6d8ff4d-2582n\" (UID: \"f805eb31-5209-4788-beca-600c9f139d8b\") " pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:37.899713 kubelet[2613]: I0130 12:55:37.899646 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0033dce8-2617-4a93-af94-68a801729315-config-volume\") pod \"coredns-7db6d8ff4d-nsdsm\" (UID: \"0033dce8-2617-4a93-af94-68a801729315\") " pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:37.899713 kubelet[2613]: I0130 12:55:37.899677 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc57d\" (UniqueName: \"kubernetes.io/projected/cf956daa-f7be-4272-9ce2-d1c78864d112-kube-api-access-xc57d\") pod \"calico-apiserver-ccfbbf7fd-dg5hc\" (UID: \"cf956daa-f7be-4272-9ce2-d1c78864d112\") " pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:37.899880 kubelet[2613]: I0130 12:55:37.899696 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bsjb\" (UniqueName: \"kubernetes.io/projected/dad53f38-004e-44aa-900a-c314d9a9e9de-kube-api-access-5bsjb\") pod \"calico-apiserver-ccfbbf7fd-c9cpd\" (UID: \"dad53f38-004e-44aa-900a-c314d9a9e9de\") " pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:37.899880 kubelet[2613]: I0130 12:55:37.899723 2613 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f805eb31-5209-4788-beca-600c9f139d8b-config-volume\") pod \"coredns-7db6d8ff4d-2582n\" (UID: \"f805eb31-5209-4788-beca-600c9f139d8b\") " pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:37.899880 kubelet[2613]: I0130 12:55:37.899741 2613 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cf956daa-f7be-4272-9ce2-d1c78864d112-calico-apiserver-certs\") pod \"calico-apiserver-ccfbbf7fd-dg5hc\" (UID: \"cf956daa-f7be-4272-9ce2-d1c78864d112\") " pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:38.142491 containerd[1445]: time="2025-01-30T12:55:38.142372567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:0,}" Jan 30 12:55:38.151810 kubelet[2613]: E0130 12:55:38.151769 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:38.152754 containerd[1445]: time="2025-01-30T12:55:38.152357702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:0,}" Jan 30 12:55:38.158776 containerd[1445]: time="2025-01-30T12:55:38.158710032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:0,}" Jan 30 12:55:38.166309 containerd[1445]: time="2025-01-30T12:55:38.166250523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:0,}" Jan 30 12:55:38.173991 kubelet[2613]: E0130 12:55:38.173939 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:38.176211 containerd[1445]: time="2025-01-30T12:55:38.176167338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:0,}" Jan 30 12:55:38.574558 containerd[1445]: time="2025-01-30T12:55:38.574505464Z" level=error msg="Failed to destroy network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.576748 containerd[1445]: time="2025-01-30T12:55:38.576703267Z" level=error msg="encountered an error cleaning up failed sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.577100 containerd[1445]: time="2025-01-30T12:55:38.576954067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.581682 containerd[1445]: time="2025-01-30T12:55:38.581562714Z" level=error msg="Failed to destroy network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.581776 kubelet[2613]: E0130 12:55:38.581705 2613 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.581859 kubelet[2613]: E0130 12:55:38.581794 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:38.581859 kubelet[2613]: E0130 12:55:38.581817 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:38.581926 kubelet[2613]: E0130 12:55:38.581876 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de" Jan 30 12:55:38.582095 containerd[1445]: time="2025-01-30T12:55:38.581496754Z" level=error msg="Failed to destroy network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.582852 containerd[1445]: time="2025-01-30T12:55:38.582821636Z" level=error msg="encountered an error cleaning up failed sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.583687 containerd[1445]: time="2025-01-30T12:55:38.583618478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.583977 containerd[1445]: time="2025-01-30T12:55:38.583524797Z" level=error msg="encountered an error cleaning up failed sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.584122 kubelet[2613]: E0130 12:55:38.584044 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.584122 kubelet[2613]: E0130 12:55:38.584097 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:38.584122 kubelet[2613]: E0130 12:55:38.584118 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:38.584299 containerd[1445]: time="2025-01-30T12:55:38.584062038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.584392 kubelet[2613]: E0130 12:55:38.584172 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podUID="97079026-7789-44af-adb9-0f82fffc0d08" Jan 30 12:55:38.584392 kubelet[2613]: E0130 12:55:38.584244 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.584392 kubelet[2613]: E0130 12:55:38.584267 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:38.584557 kubelet[2613]: E0130 12:55:38.584279 2613 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:38.584557 kubelet[2613]: E0130 12:55:38.584303 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112" Jan 30 12:55:38.586052 containerd[1445]: time="2025-01-30T12:55:38.586002681Z" level=error msg="Failed to destroy network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.586501 containerd[1445]: time="2025-01-30T12:55:38.586419282Z" level=error msg="encountered an error cleaning up failed sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.586568 containerd[1445]: time="2025-01-30T12:55:38.586530762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.587399 kubelet[2613]: E0130 12:55:38.587209 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.587399 kubelet[2613]: E0130 12:55:38.587292 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:38.587399 kubelet[2613]: E0130 12:55:38.587314 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:38.587535 kubelet[2613]: E0130 12:55:38.587356 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2582n" podUID="f805eb31-5209-4788-beca-600c9f139d8b" Jan 30 12:55:38.588680 containerd[1445]: time="2025-01-30T12:55:38.588269165Z" level=error msg="Failed to destroy network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.588680 containerd[1445]: time="2025-01-30T12:55:38.588604365Z" level=error msg="encountered an error cleaning up failed sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.588680 containerd[1445]: time="2025-01-30T12:55:38.588659725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.588960 kubelet[2613]: E0130 12:55:38.588915 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.589027 kubelet[2613]: E0130 12:55:38.588978 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:38.589027 kubelet[2613]: E0130 12:55:38.588998 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:38.589081 kubelet[2613]: E0130 12:55:38.589040 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podUID="0033dce8-2617-4a93-af94-68a801729315" Jan 30 12:55:38.662684 systemd[1]: Created slice kubepods-besteffort-podc53dd490_f49e_4931_b31d_7e8897227295.slice - libcontainer container kubepods-besteffort-podc53dd490_f49e_4931_b31d_7e8897227295.slice. Jan 30 12:55:38.665881 containerd[1445]: time="2025-01-30T12:55:38.665841482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:0,}" Jan 30 12:55:38.744720 containerd[1445]: time="2025-01-30T12:55:38.744386042Z" level=error msg="Failed to destroy network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.745369 containerd[1445]: time="2025-01-30T12:55:38.745185603Z" level=error msg="encountered an error cleaning up failed sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.745369 containerd[1445]: time="2025-01-30T12:55:38.745264083Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.745568 kubelet[2613]: E0130 12:55:38.745524 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:38.745619 kubelet[2613]: E0130 12:55:38.745584 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:38.745619 kubelet[2613]: E0130 12:55:38.745604 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:38.745722 kubelet[2613]: E0130 12:55:38.745647 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:38.837627 kubelet[2613]: I0130 12:55:38.837159 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc" Jan 30 12:55:38.840132 kubelet[2613]: I0130 12:55:38.840104 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5" Jan 30 12:55:38.840646 containerd[1445]: time="2025-01-30T12:55:38.839677627Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\"" Jan 30 12:55:38.840646 containerd[1445]: time="2025-01-30T12:55:38.840433468Z" level=info msg="Ensure that sandbox 165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc in task-service has been cleanup successfully" Jan 30 12:55:38.841051 containerd[1445]: time="2025-01-30T12:55:38.840924468Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:55:38.841461 containerd[1445]: time="2025-01-30T12:55:38.841320109Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully" Jan 30 12:55:38.841461 containerd[1445]: time="2025-01-30T12:55:38.841345109Z" level=info 
msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully" Jan 30 12:55:38.842917 containerd[1445]: time="2025-01-30T12:55:38.842039910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:1,}" Jan 30 12:55:38.842917 containerd[1445]: time="2025-01-30T12:55:38.842452951Z" level=info msg="Ensure that sandbox 4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5 in task-service has been cleanup successfully" Jan 30 12:55:38.842917 containerd[1445]: time="2025-01-30T12:55:38.842709071Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:55:38.842917 containerd[1445]: time="2025-01-30T12:55:38.842832991Z" level=info msg="Ensure that sandbox 60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62 in task-service has been cleanup successfully" Jan 30 12:55:38.843132 kubelet[2613]: I0130 12:55:38.842043 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62" Jan 30 12:55:38.843181 containerd[1445]: time="2025-01-30T12:55:38.842948832Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:55:38.843181 containerd[1445]: time="2025-01-30T12:55:38.842963072Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:55:38.843471 containerd[1445]: time="2025-01-30T12:55:38.843443112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:1,}" Jan 30 12:55:38.843859 containerd[1445]: time="2025-01-30T12:55:38.843836193Z" level=info msg="TearDown network for sandbox 
\"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:55:38.843859 containerd[1445]: time="2025-01-30T12:55:38.843858433Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:55:38.844088 kubelet[2613]: E0130 12:55:38.844068 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:38.844575 kubelet[2613]: I0130 12:55:38.844463 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362" Jan 30 12:55:38.845004 containerd[1445]: time="2025-01-30T12:55:38.844786514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:1,}" Jan 30 12:55:38.845962 containerd[1445]: time="2025-01-30T12:55:38.845908476Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\"" Jan 30 12:55:38.846775 containerd[1445]: time="2025-01-30T12:55:38.846075156Z" level=info msg="Ensure that sandbox c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362 in task-service has been cleanup successfully" Jan 30 12:55:38.846775 containerd[1445]: time="2025-01-30T12:55:38.846352197Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully" Jan 30 12:55:38.846775 containerd[1445]: time="2025-01-30T12:55:38.846373757Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully" Jan 30 12:55:38.846866 kubelet[2613]: I0130 12:55:38.846128 2613 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9" Jan 30 12:55:38.846928 containerd[1445]: time="2025-01-30T12:55:38.846791877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:1,}" Jan 30 12:55:38.847008 containerd[1445]: time="2025-01-30T12:55:38.846966198Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" Jan 30 12:55:38.847273 containerd[1445]: time="2025-01-30T12:55:38.847190438Z" level=info msg="Ensure that sandbox 0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9 in task-service has been cleanup successfully" Jan 30 12:55:38.847787 containerd[1445]: time="2025-01-30T12:55:38.847676479Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully" Jan 30 12:55:38.847787 containerd[1445]: time="2025-01-30T12:55:38.847725319Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully" Jan 30 12:55:38.848223 kubelet[2613]: E0130 12:55:38.848044 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:38.848943 containerd[1445]: time="2025-01-30T12:55:38.848866881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:1,}" Jan 30 12:55:38.849100 kubelet[2613]: I0130 12:55:38.849077 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea" Jan 30 12:55:38.849656 containerd[1445]: time="2025-01-30T12:55:38.849626642Z" level=info msg="StopPodSandbox for 
\"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\"" Jan 30 12:55:38.849811 containerd[1445]: time="2025-01-30T12:55:38.849784762Z" level=info msg="Ensure that sandbox 8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea in task-service has been cleanup successfully" Jan 30 12:55:38.850416 containerd[1445]: time="2025-01-30T12:55:38.850010962Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully" Jan 30 12:55:38.850416 containerd[1445]: time="2025-01-30T12:55:38.850028282Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully" Jan 30 12:55:38.850915 containerd[1445]: time="2025-01-30T12:55:38.850790283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:1,}" Jan 30 12:55:38.884949 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:53358.service - OpenSSH per-connection server daemon (10.0.0.1:53358). Jan 30 12:55:38.944375 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 53358 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:38.946448 sshd-session[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:38.953731 systemd-logind[1427]: New session 9 of user core. Jan 30 12:55:38.960119 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 30 12:55:39.018501 containerd[1445]: time="2025-01-30T12:55:39.018450257Z" level=error msg="Failed to destroy network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.022914 containerd[1445]: time="2025-01-30T12:55:39.021692421Z" level=error msg="encountered an error cleaning up failed sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.025992 containerd[1445]: time="2025-01-30T12:55:39.025939427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.026290 kubelet[2613]: E0130 12:55:39.026254 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.026348 kubelet[2613]: E0130 12:55:39.026314 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:39.026348 kubelet[2613]: E0130 12:55:39.026334 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:39.028236 kubelet[2613]: E0130 12:55:39.026375 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podUID="97079026-7789-44af-adb9-0f82fffc0d08" Jan 30 12:55:39.040009 systemd[1]: run-netns-cni\x2d268cce31\x2d4b3a\x2da380\x2d531c\x2d523a1ed46cff.mount: Deactivated successfully. 
Jan 30 12:55:39.040156 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9-shm.mount: Deactivated successfully. Jan 30 12:55:39.040261 systemd[1]: run-netns-cni\x2d59d4d002\x2dc8a3\x2d5582\x2d38ce\x2d6163cc6c5184.mount: Deactivated successfully. Jan 30 12:55:39.040343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea-shm.mount: Deactivated successfully. Jan 30 12:55:39.092338 containerd[1445]: time="2025-01-30T12:55:39.091039440Z" level=error msg="Failed to destroy network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.092338 containerd[1445]: time="2025-01-30T12:55:39.091467601Z" level=error msg="encountered an error cleaning up failed sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.092338 containerd[1445]: time="2025-01-30T12:55:39.091536441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.092591 kubelet[2613]: E0130 12:55:39.091829 2613 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.092591 kubelet[2613]: E0130 12:55:39.091880 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:39.092591 kubelet[2613]: E0130 12:55:39.092205 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:39.092756 kubelet[2613]: E0130 12:55:39.092248 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:39.093557 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03-shm.mount: Deactivated successfully. Jan 30 12:55:39.147618 containerd[1445]: time="2025-01-30T12:55:39.147564121Z" level=error msg="Failed to destroy network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.152341 containerd[1445]: time="2025-01-30T12:55:39.149393163Z" level=error msg="encountered an error cleaning up failed sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.152341 containerd[1445]: time="2025-01-30T12:55:39.149470723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.152611 kubelet[2613]: E0130 12:55:39.151826 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.152611 kubelet[2613]: E0130 12:55:39.151902 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:39.152611 kubelet[2613]: E0130 12:55:39.151926 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:39.151249 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8-shm.mount: Deactivated successfully. 
Jan 30 12:55:39.153381 kubelet[2613]: E0130 12:55:39.151965 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podUID="0033dce8-2617-4a93-af94-68a801729315" Jan 30 12:55:39.155199 containerd[1445]: time="2025-01-30T12:55:39.155043851Z" level=error msg="Failed to destroy network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.156858 containerd[1445]: time="2025-01-30T12:55:39.156632693Z" level=error msg="encountered an error cleaning up failed sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.157136 containerd[1445]: time="2025-01-30T12:55:39.156964854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox 
\"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.157388 kubelet[2613]: E0130 12:55:39.157350 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.157455 kubelet[2613]: E0130 12:55:39.157406 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:39.157455 kubelet[2613]: E0130 12:55:39.157425 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:39.157525 kubelet[2613]: E0130 12:55:39.157473 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112" Jan 30 12:55:39.191401 sshd[3622]: Connection closed by 10.0.0.1 port 53358 Jan 30 12:55:39.194064 sshd-session[3608]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:39.196806 containerd[1445]: time="2025-01-30T12:55:39.196755231Z" level=error msg="Failed to destroy network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.198182 containerd[1445]: time="2025-01-30T12:55:39.197556712Z" level=error msg="encountered an error cleaning up failed sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.198182 containerd[1445]: time="2025-01-30T12:55:39.197707472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.197452 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 12:55:39.200178 kubelet[2613]: E0130 12:55:39.198656 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.200178 kubelet[2613]: E0130 12:55:39.198719 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:39.200178 kubelet[2613]: E0130 12:55:39.198737 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:39.199329 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:53358.service: Deactivated successfully. 
Jan 30 12:55:39.200381 kubelet[2613]: E0130 12:55:39.198802 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2582n" podUID="f805eb31-5209-4788-beca-600c9f139d8b" Jan 30 12:55:39.202594 systemd-logind[1427]: Session 9 logged out. Waiting for processes to exit. Jan 30 12:55:39.206103 systemd-logind[1427]: Removed session 9. Jan 30 12:55:39.206402 containerd[1445]: time="2025-01-30T12:55:39.206124364Z" level=error msg="Failed to destroy network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.215393 containerd[1445]: time="2025-01-30T12:55:39.215283657Z" level=error msg="encountered an error cleaning up failed sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.215393 containerd[1445]: time="2025-01-30T12:55:39.215366817Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.216157 kubelet[2613]: E0130 12:55:39.215579 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.216157 kubelet[2613]: E0130 12:55:39.215635 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:39.216157 kubelet[2613]: E0130 12:55:39.215655 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:39.216464 kubelet[2613]: E0130 12:55:39.215691 2613 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de" Jan 30 12:55:39.852704 kubelet[2613]: I0130 12:55:39.852651 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80" Jan 30 12:55:39.853803 containerd[1445]: time="2025-01-30T12:55:39.853573446Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\"" Jan 30 12:55:39.853803 containerd[1445]: time="2025-01-30T12:55:39.853753726Z" level=info msg="Ensure that sandbox 33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80 in task-service has been cleanup successfully" Jan 30 12:55:39.854117 containerd[1445]: time="2025-01-30T12:55:39.853963767Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully" Jan 30 12:55:39.854117 containerd[1445]: time="2025-01-30T12:55:39.853979167Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully" Jan 30 12:55:39.854612 containerd[1445]: time="2025-01-30T12:55:39.854510367Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\"" Jan 30 12:55:39.854612 
containerd[1445]: time="2025-01-30T12:55:39.854594048Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully" Jan 30 12:55:39.854612 containerd[1445]: time="2025-01-30T12:55:39.854604848Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully" Jan 30 12:55:39.856108 containerd[1445]: time="2025-01-30T12:55:39.855779129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:2,}" Jan 30 12:55:39.856768 kubelet[2613]: I0130 12:55:39.856389 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1" Jan 30 12:55:39.857095 containerd[1445]: time="2025-01-30T12:55:39.857062531Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\"" Jan 30 12:55:39.857489 containerd[1445]: time="2025-01-30T12:55:39.857235851Z" level=info msg="Ensure that sandbox 44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1 in task-service has been cleanup successfully" Jan 30 12:55:39.857489 containerd[1445]: time="2025-01-30T12:55:39.857458172Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully" Jan 30 12:55:39.857489 containerd[1445]: time="2025-01-30T12:55:39.857471612Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully" Jan 30 12:55:39.858147 containerd[1445]: time="2025-01-30T12:55:39.857847772Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\"" Jan 30 12:55:39.858147 containerd[1445]: time="2025-01-30T12:55:39.857962652Z" level=info msg="TearDown network for sandbox 
\"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully" Jan 30 12:55:39.858147 containerd[1445]: time="2025-01-30T12:55:39.857974212Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully" Jan 30 12:55:39.858423 kubelet[2613]: I0130 12:55:39.858405 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03" Jan 30 12:55:39.858468 containerd[1445]: time="2025-01-30T12:55:39.858426053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:2,}" Jan 30 12:55:39.862174 containerd[1445]: time="2025-01-30T12:55:39.862113858Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:55:39.862560 containerd[1445]: time="2025-01-30T12:55:39.862394019Z" level=info msg="Ensure that sandbox b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03 in task-service has been cleanup successfully" Jan 30 12:55:39.862960 containerd[1445]: time="2025-01-30T12:55:39.862789619Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully" Jan 30 12:55:39.862960 containerd[1445]: time="2025-01-30T12:55:39.862809299Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully" Jan 30 12:55:39.867483 containerd[1445]: time="2025-01-30T12:55:39.865086463Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:55:39.867483 containerd[1445]: time="2025-01-30T12:55:39.865207903Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:55:39.867483 
containerd[1445]: time="2025-01-30T12:55:39.865219463Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:55:39.867483 containerd[1445]: time="2025-01-30T12:55:39.865981624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:2,}" Jan 30 12:55:39.867955 kubelet[2613]: I0130 12:55:39.866754 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8" Jan 30 12:55:39.893022 kubelet[2613]: I0130 12:55:39.892980 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8" Jan 30 12:55:39.894522 containerd[1445]: time="2025-01-30T12:55:39.868351787Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:55:39.894942 containerd[1445]: time="2025-01-30T12:55:39.894850105Z" level=info msg="Ensure that sandbox 685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8 in task-service has been cleanup successfully" Jan 30 12:55:39.895163 containerd[1445]: time="2025-01-30T12:55:39.895098545Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\"" Jan 30 12:55:39.895217 containerd[1445]: time="2025-01-30T12:55:39.895175585Z" level=info msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully" Jan 30 12:55:39.895217 containerd[1445]: time="2025-01-30T12:55:39.895193185Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully" Jan 30 12:55:39.895384 containerd[1445]: time="2025-01-30T12:55:39.895330106Z" level=info msg="Ensure that sandbox 
c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8 in task-service has been cleanup successfully" Jan 30 12:55:39.895985 containerd[1445]: time="2025-01-30T12:55:39.895956587Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully" Jan 30 12:55:39.896051 containerd[1445]: time="2025-01-30T12:55:39.895983667Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully" Jan 30 12:55:39.898638 containerd[1445]: time="2025-01-30T12:55:39.898592150Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" Jan 30 12:55:39.898730 containerd[1445]: time="2025-01-30T12:55:39.898693910Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully" Jan 30 12:55:39.898730 containerd[1445]: time="2025-01-30T12:55:39.898704590Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully" Jan 30 12:55:39.899050 containerd[1445]: time="2025-01-30T12:55:39.898975591Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:55:39.899115 containerd[1445]: time="2025-01-30T12:55:39.899073671Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:55:39.899115 containerd[1445]: time="2025-01-30T12:55:39.899087551Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:55:39.899174 kubelet[2613]: E0130 12:55:39.899067 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:39.899525 containerd[1445]: 
time="2025-01-30T12:55:39.899363511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:2,}" Jan 30 12:55:39.900101 kubelet[2613]: E0130 12:55:39.900076 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:39.900400 kubelet[2613]: I0130 12:55:39.900365 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b" Jan 30 12:55:39.900824 containerd[1445]: time="2025-01-30T12:55:39.900792833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:2,}" Jan 30 12:55:39.901482 containerd[1445]: time="2025-01-30T12:55:39.901072674Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\"" Jan 30 12:55:39.901482 containerd[1445]: time="2025-01-30T12:55:39.901339394Z" level=info msg="Ensure that sandbox 63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b in task-service has been cleanup successfully" Jan 30 12:55:39.902565 containerd[1445]: time="2025-01-30T12:55:39.902510036Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully" Jan 30 12:55:39.902565 containerd[1445]: time="2025-01-30T12:55:39.902561796Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully" Jan 30 12:55:39.903182 containerd[1445]: time="2025-01-30T12:55:39.903114117Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\"" Jan 30 12:55:39.903381 containerd[1445]: time="2025-01-30T12:55:39.903351557Z" level=info msg="TearDown 
network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully" Jan 30 12:55:39.903381 containerd[1445]: time="2025-01-30T12:55:39.903377157Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully" Jan 30 12:55:39.904576 containerd[1445]: time="2025-01-30T12:55:39.904065398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:2,}" Jan 30 12:55:39.980331 containerd[1445]: time="2025-01-30T12:55:39.980277587Z" level=error msg="Failed to destroy network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.980680 containerd[1445]: time="2025-01-30T12:55:39.980626027Z" level=error msg="encountered an error cleaning up failed sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.980680 containerd[1445]: time="2025-01-30T12:55:39.980683227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.982077 kubelet[2613]: E0130 
12:55:39.981394 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:39.982077 kubelet[2613]: E0130 12:55:39.981451 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:39.982077 kubelet[2613]: E0130 12:55:39.981471 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:39.982716 kubelet[2613]: E0130 12:55:39.981513 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podUID="97079026-7789-44af-adb9-0f82fffc0d08" Jan 30 12:55:40.037218 systemd[1]: run-netns-cni\x2d456c3298\x2da0ca\x2d4ab2\x2da0e5\x2d2fe5976c5264.mount: Deactivated successfully. Jan 30 12:55:40.037318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1-shm.mount: Deactivated successfully. Jan 30 12:55:40.037376 systemd[1]: run-netns-cni\x2d112433eb\x2d3a5b\x2df2d8\x2d51f9\x2d71f84b43e94f.mount: Deactivated successfully. Jan 30 12:55:40.037426 systemd[1]: run-netns-cni\x2dbbe1d02f\x2d8839\x2d1a6a\x2d32f4\x2da56924477222.mount: Deactivated successfully. Jan 30 12:55:40.037471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b-shm.mount: Deactivated successfully. Jan 30 12:55:40.037518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8-shm.mount: Deactivated successfully. Jan 30 12:55:40.040343 containerd[1445]: time="2025-01-30T12:55:40.040218189Z" level=error msg="Failed to destroy network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.037566 systemd[1]: run-netns-cni\x2d66c39143\x2d5637\x2db513\x2d7641\x2d380451a9cb5b.mount: Deactivated successfully. Jan 30 12:55:40.037608 systemd[1]: run-netns-cni\x2de7b66144\x2dc987\x2d4bdd\x2dc7e4\x2d0780434fca2a.mount: Deactivated successfully. 
Jan 30 12:55:40.037647 systemd[1]: run-netns-cni\x2d9cd07ed6\x2dd4b1\x2d0dc0\x2dec0c\x2d04cc16c98b1d.mount: Deactivated successfully. Jan 30 12:55:40.042422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3-shm.mount: Deactivated successfully. Jan 30 12:55:40.043524 containerd[1445]: time="2025-01-30T12:55:40.043420433Z" level=error msg="encountered an error cleaning up failed sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.043524 containerd[1445]: time="2025-01-30T12:55:40.043513233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.045327 kubelet[2613]: E0130 12:55:40.044263 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.045327 kubelet[2613]: E0130 12:55:40.044658 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:40.045327 kubelet[2613]: E0130 12:55:40.044683 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:40.045995 kubelet[2613]: E0130 12:55:40.044778 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de" Jan 30 12:55:40.079204 containerd[1445]: time="2025-01-30T12:55:40.078242959Z" level=error msg="Failed to destroy network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 12:55:40.079881 containerd[1445]: time="2025-01-30T12:55:40.079834121Z" level=error msg="encountered an error cleaning up failed sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.079978 containerd[1445]: time="2025-01-30T12:55:40.079922522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.080219 kubelet[2613]: E0130 12:55:40.080144 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.080219 kubelet[2613]: E0130 12:55:40.080209 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:40.080368 
kubelet[2613]: E0130 12:55:40.080228 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:40.080368 kubelet[2613]: E0130 12:55:40.080264 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:40.081093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361-shm.mount: Deactivated successfully. 
Jan 30 12:55:40.107248 containerd[1445]: time="2025-01-30T12:55:40.107028758Z" level=error msg="Failed to destroy network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.108900 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85-shm.mount: Deactivated successfully. Jan 30 12:55:40.109426 containerd[1445]: time="2025-01-30T12:55:40.109386001Z" level=error msg="encountered an error cleaning up failed sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.109572 containerd[1445]: time="2025-01-30T12:55:40.109548161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.110470 kubelet[2613]: E0130 12:55:40.110259 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 30 12:55:40.110470 kubelet[2613]: E0130 12:55:40.110321 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:40.110470 kubelet[2613]: E0130 12:55:40.110344 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:40.111404 kubelet[2613]: E0130 12:55:40.110380 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podUID="0033dce8-2617-4a93-af94-68a801729315" Jan 30 12:55:40.118363 containerd[1445]: time="2025-01-30T12:55:40.118073773Z" level=error msg="Failed to destroy network for sandbox 
\"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.120101 containerd[1445]: time="2025-01-30T12:55:40.119708095Z" level=error msg="encountered an error cleaning up failed sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.120101 containerd[1445]: time="2025-01-30T12:55:40.119789615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.120695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7-shm.mount: Deactivated successfully. 
Jan 30 12:55:40.121469 kubelet[2613]: E0130 12:55:40.120782 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.121469 kubelet[2613]: E0130 12:55:40.120849 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:40.121469 kubelet[2613]: E0130 12:55:40.120949 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:40.121563 kubelet[2613]: E0130 12:55:40.120991 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2582n" podUID="f805eb31-5209-4788-beca-600c9f139d8b" Jan 30 12:55:40.126423 containerd[1445]: time="2025-01-30T12:55:40.126186343Z" level=error msg="Failed to destroy network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.129928 containerd[1445]: time="2025-01-30T12:55:40.126650744Z" level=error msg="encountered an error cleaning up failed sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.129928 containerd[1445]: time="2025-01-30T12:55:40.126716744Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.130339 kubelet[2613]: E0130 12:55:40.130265 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:40.130409 kubelet[2613]: E0130 12:55:40.130349 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:40.130409 kubelet[2613]: E0130 12:55:40.130369 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:40.130460 kubelet[2613]: E0130 12:55:40.130414 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112" Jan 30 12:55:40.907769 kubelet[2613]: I0130 12:55:40.907728 2613 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561" Jan 30 12:55:40.908301 containerd[1445]: time="2025-01-30T12:55:40.908262268Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\"" Jan 30 12:55:40.908539 containerd[1445]: time="2025-01-30T12:55:40.908456588Z" level=info msg="Ensure that sandbox 38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561 in task-service has been cleanup successfully" Jan 30 12:55:40.909986 kubelet[2613]: I0130 12:55:40.909956 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361" Jan 30 12:55:40.910078 containerd[1445]: time="2025-01-30T12:55:40.909951190Z" level=info msg="TearDown network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" successfully" Jan 30 12:55:40.910078 containerd[1445]: time="2025-01-30T12:55:40.909982310Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" returns successfully" Jan 30 12:55:40.911343 containerd[1445]: time="2025-01-30T12:55:40.910353831Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\"" Jan 30 12:55:40.911446 containerd[1445]: time="2025-01-30T12:55:40.911427632Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully" Jan 30 12:55:40.911476 containerd[1445]: time="2025-01-30T12:55:40.911445152Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully" Jan 30 12:55:40.911630 containerd[1445]: time="2025-01-30T12:55:40.911606032Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" Jan 30 12:55:40.911756 
kubelet[2613]: I0130 12:55:40.911729 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85" Jan 30 12:55:40.911815 containerd[1445]: time="2025-01-30T12:55:40.911794832Z" level=info msg="Ensure that sandbox a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361 in task-service has been cleanup successfully" Jan 30 12:55:40.912056 containerd[1445]: time="2025-01-30T12:55:40.911882473Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\"" Jan 30 12:55:40.912056 containerd[1445]: time="2025-01-30T12:55:40.911978913Z" level=info msg="TearDown network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" successfully" Jan 30 12:55:40.912056 containerd[1445]: time="2025-01-30T12:55:40.911992073Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" returns successfully" Jan 30 12:55:40.912267 containerd[1445]: time="2025-01-30T12:55:40.912247553Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully" Jan 30 12:55:40.912309 containerd[1445]: time="2025-01-30T12:55:40.912268753Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912621954Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912644874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:3,}" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912779714Z" level=info msg="Ensure that sandbox 
71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85 in task-service has been cleanup successfully" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912638594Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912865954Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully" Jan 30 12:55:40.912929 containerd[1445]: time="2025-01-30T12:55:40.912898874Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully" Jan 30 12:55:40.913112 containerd[1445]: time="2025-01-30T12:55:40.913096474Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:55:40.913200 containerd[1445]: time="2025-01-30T12:55:40.913181474Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:55:40.913200 containerd[1445]: time="2025-01-30T12:55:40.913195874Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:55:40.913301 containerd[1445]: time="2025-01-30T12:55:40.913280154Z" level=info msg="TearDown network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" successfully" Jan 30 12:55:40.913356 containerd[1445]: time="2025-01-30T12:55:40.913343355Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" returns successfully" Jan 30 12:55:40.913713 containerd[1445]: time="2025-01-30T12:55:40.913681235Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:55:40.913951 containerd[1445]: time="2025-01-30T12:55:40.913907755Z" level=info 
msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully" Jan 30 12:55:40.913951 containerd[1445]: time="2025-01-30T12:55:40.913943755Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully" Jan 30 12:55:40.914382 kubelet[2613]: I0130 12:55:40.914358 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3" Jan 30 12:55:40.914574 containerd[1445]: time="2025-01-30T12:55:40.914353436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:3,}" Jan 30 12:55:40.914861 containerd[1445]: time="2025-01-30T12:55:40.914823677Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:55:40.915831 containerd[1445]: time="2025-01-30T12:55:40.915055437Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\"" Jan 30 12:55:40.916104 containerd[1445]: time="2025-01-30T12:55:40.916081638Z" level=info msg="Ensure that sandbox 89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3 in task-service has been cleanup successfully" Jan 30 12:55:40.916227 containerd[1445]: time="2025-01-30T12:55:40.915275637Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:55:40.916264 containerd[1445]: time="2025-01-30T12:55:40.916226958Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:55:40.916461 containerd[1445]: time="2025-01-30T12:55:40.916432839Z" level=info msg="TearDown network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" successfully" Jan 30 
12:55:40.916461 containerd[1445]: time="2025-01-30T12:55:40.916459599Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" returns successfully" Jan 30 12:55:40.916515 kubelet[2613]: E0130 12:55:40.916443 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.916694599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:3,}" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.916767199Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\"" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.916848199Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.916858039Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.917100720Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\"" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.917172520Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully" Jan 30 12:55:40.917206 containerd[1445]: time="2025-01-30T12:55:40.917182600Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully" Jan 30 12:55:40.918190 kubelet[2613]: I0130 12:55:40.917495 2613 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7" Jan 30 12:55:40.918274 containerd[1445]: time="2025-01-30T12:55:40.917835321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:3,}" Jan 30 12:55:40.918965 containerd[1445]: time="2025-01-30T12:55:40.918924562Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\"" Jan 30 12:55:40.919112 containerd[1445]: time="2025-01-30T12:55:40.919090762Z" level=info msg="Ensure that sandbox c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7 in task-service has been cleanup successfully" Jan 30 12:55:40.919465 containerd[1445]: time="2025-01-30T12:55:40.919433963Z" level=info msg="TearDown network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" successfully" Jan 30 12:55:40.919465 containerd[1445]: time="2025-01-30T12:55:40.919456963Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" returns successfully" Jan 30 12:55:40.920274 kubelet[2613]: I0130 12:55:40.919839 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88" Jan 30 12:55:40.920418 containerd[1445]: time="2025-01-30T12:55:40.920387684Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" Jan 30 12:55:40.920680 containerd[1445]: time="2025-01-30T12:55:40.920508284Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\"" Jan 30 12:55:40.920680 containerd[1445]: time="2025-01-30T12:55:40.920551564Z" level=info msg="Ensure that sandbox 906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88 in task-service has been cleanup successfully" Jan 30 12:55:40.920680 
containerd[1445]: time="2025-01-30T12:55:40.920591884Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully" Jan 30 12:55:40.920680 containerd[1445]: time="2025-01-30T12:55:40.920602004Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully" Jan 30 12:55:40.920793 containerd[1445]: time="2025-01-30T12:55:40.920712644Z" level=info msg="TearDown network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" successfully" Jan 30 12:55:40.920793 containerd[1445]: time="2025-01-30T12:55:40.920726404Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" returns successfully" Jan 30 12:55:40.921031 containerd[1445]: time="2025-01-30T12:55:40.921008325Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\"" Jan 30 12:55:40.921304 containerd[1445]: time="2025-01-30T12:55:40.921091125Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" Jan 30 12:55:40.921304 containerd[1445]: time="2025-01-30T12:55:40.921220005Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully" Jan 30 12:55:40.921304 containerd[1445]: time="2025-01-30T12:55:40.921235725Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully" Jan 30 12:55:40.921304 containerd[1445]: time="2025-01-30T12:55:40.921245285Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully" Jan 30 12:55:40.921304 containerd[1445]: time="2025-01-30T12:55:40.921256725Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully" Jan 30 
12:55:40.921622 containerd[1445]: time="2025-01-30T12:55:40.921532005Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\"" Jan 30 12:55:40.921622 containerd[1445]: time="2025-01-30T12:55:40.921602646Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully" Jan 30 12:55:40.921622 containerd[1445]: time="2025-01-30T12:55:40.921611366Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully" Jan 30 12:55:40.921684 kubelet[2613]: E0130 12:55:40.921413 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:40.921984 containerd[1445]: time="2025-01-30T12:55:40.921727246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:3,}" Jan 30 12:55:40.923190 containerd[1445]: time="2025-01-30T12:55:40.923156288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:3,}" Jan 30 12:55:41.035743 systemd[1]: run-netns-cni\x2d3aee93d1\x2d9a83\x2d4d62\x2dace1\x2daf608ed219b0.mount: Deactivated successfully. Jan 30 12:55:41.035849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88-shm.mount: Deactivated successfully. Jan 30 12:55:41.035922 systemd[1]: run-netns-cni\x2db1a92258\x2df83c\x2d39d0\x2d116f\x2d1e6ed67dcb92.mount: Deactivated successfully. Jan 30 12:55:41.035970 systemd[1]: run-netns-cni\x2d6a7eca52\x2d1ff6\x2db6db\x2dd404\x2d10a8295617f2.mount: Deactivated successfully. 
Jan 30 12:55:41.036011 systemd[1]: run-netns-cni\x2dc9fc34a0\x2d9ee6\x2dad3f\x2dfeda\x2d9f41a24e75a1.mount: Deactivated successfully.
Jan 30 12:55:41.036056 systemd[1]: run-netns-cni\x2d4740186b\x2d5626\x2dff69\x2dc063\x2dd62b9b4052b7.mount: Deactivated successfully.
Jan 30 12:55:41.036098 systemd[1]: run-netns-cni\x2d712240ac\x2d4eb8\x2d9359\x2d2f69\x2d97affacba809.mount: Deactivated successfully.
Jan 30 12:55:41.133686 containerd[1445]: time="2025-01-30T12:55:41.133172397Z" level=error msg="Failed to destroy network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.134457 containerd[1445]: time="2025-01-30T12:55:41.134422359Z" level=error msg="encountered an error cleaning up failed sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.134523 containerd[1445]: time="2025-01-30T12:55:41.134488879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.135010 kubelet[2613]: E0130 12:55:41.134749 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.135010 kubelet[2613]: E0130 12:55:41.134882 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd"
Jan 30 12:55:41.135010 kubelet[2613]: E0130 12:55:41.134917 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd"
Jan 30 12:55:41.135200 kubelet[2613]: E0130 12:55:41.135001 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de"
Jan 30 12:55:41.225526 containerd[1445]: time="2025-01-30T12:55:41.225442673Z" level=error msg="Failed to destroy network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.225848 containerd[1445]: time="2025-01-30T12:55:41.225819553Z" level=error msg="encountered an error cleaning up failed sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.225942 containerd[1445]: time="2025-01-30T12:55:41.225921273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.226298 kubelet[2613]: E0130 12:55:41.226249 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.226364 kubelet[2613]: E0130 12:55:41.226344 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n"
Jan 30 12:55:41.226391 kubelet[2613]: E0130 12:55:41.226371 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n"
Jan 30 12:55:41.226467 kubelet[2613]: E0130 12:55:41.226443 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2582n" podUID="f805eb31-5209-4788-beca-600c9f139d8b"
Jan 30 12:55:41.240218 containerd[1445]: time="2025-01-30T12:55:41.240165171Z" level=error msg="Failed to destroy network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.240571 containerd[1445]: time="2025-01-30T12:55:41.240536412Z" level=error msg="encountered an error cleaning up failed sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.240622 containerd[1445]: time="2025-01-30T12:55:41.240602212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.240862 kubelet[2613]: E0130 12:55:41.240818 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.240945 kubelet[2613]: E0130 12:55:41.240874 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm"
Jan 30 12:55:41.240945 kubelet[2613]: E0130 12:55:41.240911 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm"
Jan 30 12:55:41.241009 kubelet[2613]: E0130 12:55:41.240953 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podUID="0033dce8-2617-4a93-af94-68a801729315"
Jan 30 12:55:41.244458 containerd[1445]: time="2025-01-30T12:55:41.244416336Z" level=error msg="Failed to destroy network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.247084 containerd[1445]: time="2025-01-30T12:55:41.247039540Z" level=error msg="encountered an error cleaning up failed sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.247306 containerd[1445]: time="2025-01-30T12:55:41.247282180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.247704 kubelet[2613]: E0130 12:55:41.247650 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.247775 kubelet[2613]: E0130 12:55:41.247718 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w"
Jan 30 12:55:41.247775 kubelet[2613]: E0130 12:55:41.247741 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w"
Jan 30 12:55:41.247819 kubelet[2613]: E0130 12:55:41.247778 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295"
Jan 30 12:55:41.254957 containerd[1445]: time="2025-01-30T12:55:41.254858989Z" level=error msg="Failed to destroy network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.256080 containerd[1445]: time="2025-01-30T12:55:41.256032631Z" level=error msg="Failed to destroy network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.256565 containerd[1445]: time="2025-01-30T12:55:41.256525272Z" level=error msg="encountered an error cleaning up failed sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.256624 containerd[1445]: time="2025-01-30T12:55:41.256590752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.256945 kubelet[2613]: E0130 12:55:41.256799 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.256945 kubelet[2613]: E0130 12:55:41.256859 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4"
Jan 30 12:55:41.256945 kubelet[2613]: E0130 12:55:41.256881 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4"
Jan 30 12:55:41.257304 kubelet[2613]: E0130 12:55:41.256983 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podUID="97079026-7789-44af-adb9-0f82fffc0d08"
Jan 30 12:55:41.262752 containerd[1445]: time="2025-01-30T12:55:41.262707119Z" level=error msg="encountered an error cleaning up failed sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.262956 containerd[1445]: time="2025-01-30T12:55:41.262928640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.263296 kubelet[2613]: E0130 12:55:41.263261 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 12:55:41.263353 kubelet[2613]: E0130 12:55:41.263315 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc"
Jan 30 12:55:41.263353 kubelet[2613]: E0130 12:55:41.263335 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc"
Jan 30 12:55:41.263436 kubelet[2613]: E0130 12:55:41.263388 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112"
Jan 30 12:55:41.931419 kubelet[2613]: I0130 12:55:41.931382 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d"
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.932217437Z" level=info msg="StopPodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\""
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.932416278Z" level=info msg="Ensure that sandbox 9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d in task-service has been cleanup successfully"
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.932620238Z" level=info msg="TearDown network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" successfully"
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.932636438Z" level=info msg="StopPodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" returns successfully"
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.933040679Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\""
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.933114199Z" level=info msg="TearDown network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" successfully"
Jan 30 12:55:41.933786 containerd[1445]: time="2025-01-30T12:55:41.933135599Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" returns successfully"
Jan 30 12:55:41.934278 containerd[1445]: time="2025-01-30T12:55:41.934073640Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\""
Jan 30 12:55:41.934278 containerd[1445]: time="2025-01-30T12:55:41.934167880Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully"
Jan 30 12:55:41.934278 containerd[1445]: time="2025-01-30T12:55:41.934178720Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully"
Jan 30 12:55:41.935086 containerd[1445]: time="2025-01-30T12:55:41.935049801Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\""
Jan 30 12:55:41.935301 containerd[1445]: time="2025-01-30T12:55:41.935233161Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully"
Jan 30 12:55:41.935301 containerd[1445]: time="2025-01-30T12:55:41.935251001Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully"
Jan 30 12:55:41.935694 kubelet[2613]: I0130 12:55:41.935673 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7"
Jan 30 12:55:41.938475 containerd[1445]: time="2025-01-30T12:55:41.937589804Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\""
Jan 30 12:55:41.938475 containerd[1445]: time="2025-01-30T12:55:41.937762644Z" level=info msg="Ensure that sandbox 7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7 in task-service has been cleanup successfully"
Jan 30 12:55:41.939031 containerd[1445]: time="2025-01-30T12:55:41.938991046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:4,}"
Jan 30 12:55:41.939838 containerd[1445]: time="2025-01-30T12:55:41.939809807Z" level=info msg="TearDown network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" successfully"
Jan 30 12:55:41.940025 containerd[1445]: time="2025-01-30T12:55:41.940004407Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" returns successfully"
Jan 30 12:55:41.943256 containerd[1445]: time="2025-01-30T12:55:41.943218971Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\""
Jan 30 12:55:41.943506 containerd[1445]: time="2025-01-30T12:55:41.943473572Z" level=info msg="TearDown network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" successfully"
Jan 30 12:55:41.943573 containerd[1445]: time="2025-01-30T12:55:41.943559932Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" returns successfully"
Jan 30 12:55:41.943974 containerd[1445]: time="2025-01-30T12:55:41.943858172Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\""
Jan 30 12:55:41.943974 containerd[1445]: time="2025-01-30T12:55:41.943957932Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully"
Jan 30 12:55:41.943974 containerd[1445]: time="2025-01-30T12:55:41.943968332Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully"
Jan 30 12:55:41.944225 containerd[1445]: time="2025-01-30T12:55:41.944199092Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\""
Jan 30 12:55:41.944273 kubelet[2613]: I0130 12:55:41.944253 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25"
Jan 30 12:55:41.944315 containerd[1445]: time="2025-01-30T12:55:41.944267613Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully"
Jan 30 12:55:41.944315 containerd[1445]: time="2025-01-30T12:55:41.944278573Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully"
Jan 30 12:55:41.944858 containerd[1445]: time="2025-01-30T12:55:41.944661453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:4,}"
Jan 30 12:55:41.945184 containerd[1445]: time="2025-01-30T12:55:41.944970773Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\""
Jan 30 12:55:41.945263 containerd[1445]: time="2025-01-30T12:55:41.945234534Z" level=info msg="Ensure that sandbox 13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25 in task-service has been cleanup successfully"
Jan 30 12:55:41.945797 containerd[1445]: time="2025-01-30T12:55:41.945678494Z" level=info msg="TearDown network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" successfully"
Jan 30 12:55:41.945797 containerd[1445]: time="2025-01-30T12:55:41.945792934Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" returns successfully"
Jan 30 12:55:41.946136 containerd[1445]: time="2025-01-30T12:55:41.946091455Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\""
Jan 30 12:55:41.946348 containerd[1445]: time="2025-01-30T12:55:41.946327055Z" level=info msg="TearDown network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" successfully"
Jan 30 12:55:41.946626 containerd[1445]: time="2025-01-30T12:55:41.946573575Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" returns successfully"
Jan 30 12:55:41.948098 containerd[1445]: time="2025-01-30T12:55:41.948035097Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\""
Jan 30 12:55:41.948353 containerd[1445]: time="2025-01-30T12:55:41.948256138Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully"
Jan 30 12:55:41.948353 containerd[1445]: time="2025-01-30T12:55:41.948278258Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully"
Jan 30 12:55:41.948650 containerd[1445]: time="2025-01-30T12:55:41.948621338Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\""
Jan 30 12:55:41.948731 containerd[1445]: time="2025-01-30T12:55:41.948714778Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully"
Jan 30 12:55:41.948768 containerd[1445]: time="2025-01-30T12:55:41.948729978Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully"
Jan 30 12:55:41.949270 kubelet[2613]: I0130 12:55:41.949056 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52"
Jan 30 12:55:41.949944 containerd[1445]: time="2025-01-30T12:55:41.949615779Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\""
Jan 30 12:55:41.949944 containerd[1445]: time="2025-01-30T12:55:41.949786059Z" level=info msg="Ensure that sandbox f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52 in task-service has been cleanup successfully"
Jan 30 12:55:41.950080 containerd[1445]: time="2025-01-30T12:55:41.950039740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:4,}"
Jan 30 12:55:41.950287 containerd[1445]: time="2025-01-30T12:55:41.950061220Z" level=info msg="TearDown network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" successfully"
Jan 30 12:55:41.950287 containerd[1445]: time="2025-01-30T12:55:41.950279500Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" returns successfully"
Jan 30 12:55:41.950936 containerd[1445]: time="2025-01-30T12:55:41.950909501Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\""
Jan 30 12:55:41.951015 containerd[1445]: time="2025-01-30T12:55:41.951000541Z" level=info msg="TearDown network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" successfully"
Jan 30 12:55:41.951048 containerd[1445]: time="2025-01-30T12:55:41.951015021Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" returns successfully"
Jan 30 12:55:41.951316 containerd[1445]: time="2025-01-30T12:55:41.951289621Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\""
Jan 30 12:55:41.951396 containerd[1445]: time="2025-01-30T12:55:41.951381181Z" level=info msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully"
Jan 30 12:55:41.951430 containerd[1445]: time="2025-01-30T12:55:41.951395701Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully"
Jan 30 12:55:41.951928 containerd[1445]: time="2025-01-30T12:55:41.951905342Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\""
Jan 30 12:55:41.952001 containerd[1445]: time="2025-01-30T12:55:41.951984342Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully"
Jan 30 12:55:41.952042 containerd[1445]: time="2025-01-30T12:55:41.952000742Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully"
Jan 30 12:55:41.952233 kubelet[2613]: E0130 12:55:41.952206 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:55:41.952639 containerd[1445]: time="2025-01-30T12:55:41.952572943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:4,}"
Jan 30 12:55:41.954136 kubelet[2613]: I0130 12:55:41.954101 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269"
Jan 30 12:55:41.954759 containerd[1445]: time="2025-01-30T12:55:41.954717466Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\""
Jan 30 12:55:41.955211 containerd[1445]: time="2025-01-30T12:55:41.955086306Z" level=info msg="Ensure that sandbox 117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269 in task-service has been cleanup successfully"
Jan 30 12:55:41.955539 containerd[1445]: time="2025-01-30T12:55:41.955507787Z" level=info msg="TearDown network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" successfully"
Jan 30 12:55:41.955539 containerd[1445]: time="2025-01-30T12:55:41.955534787Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" returns successfully"
Jan 30 12:55:41.956061 containerd[1445]: time="2025-01-30T12:55:41.956000867Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\""
Jan 30 12:55:41.956113 containerd[1445]: time="2025-01-30T12:55:41.956101627Z" level=info msg="TearDown network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" successfully"
Jan 30 12:55:41.956173 containerd[1445]: time="2025-01-30T12:55:41.956114107Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" returns successfully"
Jan 30 12:55:41.956645 containerd[1445]: time="2025-01-30T12:55:41.956498788Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\""
Jan 30 12:55:41.956645 containerd[1445]: time="2025-01-30T12:55:41.956583108Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully"
Jan 30 12:55:41.956645 containerd[1445]: time="2025-01-30T12:55:41.956592628Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully"
Jan 30 12:55:41.957561 containerd[1445]: time="2025-01-30T12:55:41.957532509Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\""
Jan 30 12:55:41.957718 containerd[1445]: time="2025-01-30T12:55:41.957702029Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully"
Jan 30 12:55:41.957832 containerd[1445]: time="2025-01-30T12:55:41.957773709Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully"
Jan 30 12:55:41.958748 kubelet[2613]: I0130 12:55:41.958718 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7"
Jan 30 12:55:41.960466 containerd[1445]: time="2025-01-30T12:55:41.958967671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:4,}"
Jan 30 12:55:41.960466 containerd[1445]: time="2025-01-30T12:55:41.959365511Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\""
Jan 30 12:55:41.960466 containerd[1445]: time="2025-01-30T12:55:41.959800792Z" level=info msg="Ensure that sandbox e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7 in task-service has been cleanup successfully"
Jan 30 12:55:41.960466 containerd[1445]: time="2025-01-30T12:55:41.960016472Z" level=info msg="TearDown network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" successfully"
Jan 30 12:55:41.960466 containerd[1445]: time="2025-01-30T12:55:41.960032712Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" returns successfully"
Jan 30 12:55:41.960796 containerd[1445]: time="2025-01-30T12:55:41.960769953Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\""
Jan 30 12:55:41.960955 containerd[1445]: time="2025-01-30T12:55:41.960937433Z" level=info msg="TearDown network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" successfully"
Jan 30 12:55:41.961050 containerd[1445]: time="2025-01-30T12:55:41.961008434Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" returns successfully"
Jan 30 12:55:41.961418 containerd[1445]: time="2025-01-30T12:55:41.961390914Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\""
Jan 30 12:55:41.961650 containerd[1445]:
time="2025-01-30T12:55:41.961632594Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully" Jan 30 12:55:41.961731 containerd[1445]: time="2025-01-30T12:55:41.961716394Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully" Jan 30 12:55:41.962008 containerd[1445]: time="2025-01-30T12:55:41.961986715Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" Jan 30 12:55:41.962186 containerd[1445]: time="2025-01-30T12:55:41.962168435Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully" Jan 30 12:55:41.962262 containerd[1445]: time="2025-01-30T12:55:41.962248275Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully" Jan 30 12:55:41.962819 kubelet[2613]: E0130 12:55:41.962791 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:41.963254 containerd[1445]: time="2025-01-30T12:55:41.963210556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:4,}" Jan 30 12:55:42.030574 systemd[1]: run-netns-cni\x2dcef671ad\x2daf2d\x2df3c2\x2dc9b8\x2d43d671b25170.mount: Deactivated successfully. Jan 30 12:55:42.030672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52-shm.mount: Deactivated successfully. Jan 30 12:55:42.030730 systemd[1]: run-netns-cni\x2d93ba82aa\x2d7d60\x2de3ce\x2d7a61\x2d661d22207f00.mount: Deactivated successfully. 
Jan 30 12:55:42.030777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25-shm.mount: Deactivated successfully. Jan 30 12:55:42.030830 systemd[1]: run-netns-cni\x2d5a6f3847\x2da1c3\x2d8f02\x2d99a9\x2dfe054e9c629b.mount: Deactivated successfully. Jan 30 12:55:42.030874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7-shm.mount: Deactivated successfully. Jan 30 12:55:42.030935 systemd[1]: run-netns-cni\x2d74304a80\x2dc88f\x2dfb7a\x2d40ec\x2d2e01e9f4f9d8.mount: Deactivated successfully. Jan 30 12:55:42.030978 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269-shm.mount: Deactivated successfully. Jan 30 12:55:42.376425 containerd[1445]: time="2025-01-30T12:55:42.376079084Z" level=error msg="Failed to destroy network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.380927 containerd[1445]: time="2025-01-30T12:55:42.380825249Z" level=error msg="encountered an error cleaning up failed sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.381095 containerd[1445]: time="2025-01-30T12:55:42.381070130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.382437 kubelet[2613]: E0130 12:55:42.382077 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.382437 kubelet[2613]: E0130 12:55:42.382149 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:42.382437 kubelet[2613]: E0130 12:55:42.382174 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2582n" Jan 30 12:55:42.382616 kubelet[2613]: E0130 12:55:42.382213 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-2582n_kube-system(f805eb31-5209-4788-beca-600c9f139d8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2582n" podUID="f805eb31-5209-4788-beca-600c9f139d8b" Jan 30 12:55:42.386248 containerd[1445]: time="2025-01-30T12:55:42.386208656Z" level=error msg="Failed to destroy network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.386811 containerd[1445]: time="2025-01-30T12:55:42.386653176Z" level=error msg="encountered an error cleaning up failed sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.386811 containerd[1445]: time="2025-01-30T12:55:42.386719536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.387309 kubelet[2613]: E0130 12:55:42.387108 2613 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.387309 kubelet[2613]: E0130 12:55:42.387203 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:42.387309 kubelet[2613]: E0130 12:55:42.387231 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" Jan 30 12:55:42.387430 kubelet[2613]: E0130 12:55:42.387274 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-dg5hc_calico-apiserver(cf956daa-f7be-4272-9ce2-d1c78864d112)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podUID="cf956daa-f7be-4272-9ce2-d1c78864d112" Jan 30 12:55:42.394131 containerd[1445]: time="2025-01-30T12:55:42.393983385Z" level=error msg="Failed to destroy network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.394590 containerd[1445]: time="2025-01-30T12:55:42.394433905Z" level=error msg="encountered an error cleaning up failed sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.394590 containerd[1445]: time="2025-01-30T12:55:42.394495705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.394951 kubelet[2613]: E0130 12:55:42.394783 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 30 12:55:42.394951 kubelet[2613]: E0130 12:55:42.394834 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:42.394951 kubelet[2613]: E0130 12:55:42.394859 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rx85w" Jan 30 12:55:42.395183 kubelet[2613]: E0130 12:55:42.395117 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rx85w_calico-system(c53dd490-f49e-4931-b31d-7e8897227295)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rx85w" podUID="c53dd490-f49e-4931-b31d-7e8897227295" Jan 30 12:55:42.415146 containerd[1445]: time="2025-01-30T12:55:42.414991610Z" level=error msg="Failed to destroy network for sandbox 
\"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.415546 containerd[1445]: time="2025-01-30T12:55:42.415519330Z" level=error msg="encountered an error cleaning up failed sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.415703 containerd[1445]: time="2025-01-30T12:55:42.415682570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.416065 kubelet[2613]: E0130 12:55:42.415993 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.416065 kubelet[2613]: E0130 12:55:42.416055 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:42.416178 kubelet[2613]: E0130 12:55:42.416076 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" Jan 30 12:55:42.416178 kubelet[2613]: E0130 12:55:42.416113 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-ccfbbf7fd-c9cpd_calico-apiserver(dad53f38-004e-44aa-900a-c314d9a9e9de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podUID="dad53f38-004e-44aa-900a-c314d9a9e9de" Jan 30 12:55:42.420921 containerd[1445]: time="2025-01-30T12:55:42.420726936Z" level=error msg="Failed to destroy network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.421383 containerd[1445]: 
time="2025-01-30T12:55:42.421345897Z" level=error msg="encountered an error cleaning up failed sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.421453 containerd[1445]: time="2025-01-30T12:55:42.421417017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.421649 kubelet[2613]: E0130 12:55:42.421615 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.421739 kubelet[2613]: E0130 12:55:42.421673 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:42.421739 kubelet[2613]: E0130 12:55:42.421692 2613 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" Jan 30 12:55:42.421820 kubelet[2613]: E0130 12:55:42.421733 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59db857b5c-zqhl4_calico-system(97079026-7789-44af-adb9-0f82fffc0d08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podUID="97079026-7789-44af-adb9-0f82fffc0d08" Jan 30 12:55:42.433736 containerd[1445]: time="2025-01-30T12:55:42.433677151Z" level=error msg="Failed to destroy network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.434234 containerd[1445]: time="2025-01-30T12:55:42.434059992Z" level=error msg="encountered an error cleaning up failed sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.434234 containerd[1445]: time="2025-01-30T12:55:42.434135552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.434460 kubelet[2613]: E0130 12:55:42.434382 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 12:55:42.434516 kubelet[2613]: E0130 12:55:42.434499 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:42.434553 kubelet[2613]: E0130 12:55:42.434523 2613 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nsdsm" Jan 30 12:55:42.434587 kubelet[2613]: E0130 12:55:42.434561 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nsdsm_kube-system(0033dce8-2617-4a93-af94-68a801729315)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podUID="0033dce8-2617-4a93-af94-68a801729315" Jan 30 12:55:42.517758 containerd[1445]: time="2025-01-30T12:55:42.517703890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:42.518556 containerd[1445]: time="2025-01-30T12:55:42.518506251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 12:55:42.519525 containerd[1445]: time="2025-01-30T12:55:42.519485852Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:42.522163 containerd[1445]: time="2025-01-30T12:55:42.522113815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:42.523158 containerd[1445]: time="2025-01-30T12:55:42.523114376Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.687428012s" Jan 30 12:55:42.523158 containerd[1445]: time="2025-01-30T12:55:42.523155857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 12:55:42.531706 containerd[1445]: time="2025-01-30T12:55:42.531664826Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 12:55:42.544775 containerd[1445]: time="2025-01-30T12:55:42.544727722Z" level=info msg="CreateContainer within sandbox \"62479c7f33b2da2006319edbfa849ab71eb4479268503c68155e7aceb19acda9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e\"" Jan 30 12:55:42.545536 containerd[1445]: time="2025-01-30T12:55:42.545475443Z" level=info msg="StartContainer for \"f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e\"" Jan 30 12:55:42.606114 systemd[1]: Started cri-containerd-f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e.scope - libcontainer container f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e. Jan 30 12:55:42.647148 containerd[1445]: time="2025-01-30T12:55:42.647022602Z" level=info msg="StartContainer for \"f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e\" returns successfully" Jan 30 12:55:42.872843 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 12:55:42.872963 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 30 12:55:42.965189 kubelet[2613]: I0130 12:55:42.964958 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf" Jan 30 12:55:42.967953 containerd[1445]: time="2025-01-30T12:55:42.966574497Z" level=info msg="StopPodSandbox for \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\"" Jan 30 12:55:42.967953 containerd[1445]: time="2025-01-30T12:55:42.967178458Z" level=info msg="Ensure that sandbox e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf in task-service has been cleanup successfully" Jan 30 12:55:42.967953 containerd[1445]: time="2025-01-30T12:55:42.967480858Z" level=info msg="TearDown network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" successfully" Jan 30 12:55:42.967953 containerd[1445]: time="2025-01-30T12:55:42.967498258Z" level=info msg="StopPodSandbox for \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" returns successfully" Jan 30 12:55:42.970363 containerd[1445]: time="2025-01-30T12:55:42.970312581Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\"" Jan 30 12:55:42.970678 containerd[1445]: time="2025-01-30T12:55:42.970651182Z" level=info msg="TearDown network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" successfully" Jan 30 12:55:42.970678 containerd[1445]: time="2025-01-30T12:55:42.970672102Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" returns successfully" Jan 30 12:55:42.971252 containerd[1445]: time="2025-01-30T12:55:42.971223022Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\"" Jan 30 12:55:42.971389 containerd[1445]: time="2025-01-30T12:55:42.971318782Z" level=info msg="TearDown network for sandbox 
\"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" successfully" Jan 30 12:55:42.971389 containerd[1445]: time="2025-01-30T12:55:42.971329543Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" returns successfully" Jan 30 12:55:42.972532 containerd[1445]: time="2025-01-30T12:55:42.972483464Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\"" Jan 30 12:55:42.972612 containerd[1445]: time="2025-01-30T12:55:42.972578104Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully" Jan 30 12:55:42.972612 containerd[1445]: time="2025-01-30T12:55:42.972588744Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully" Jan 30 12:55:42.973281 containerd[1445]: time="2025-01-30T12:55:42.973249225Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\"" Jan 30 12:55:42.973365 containerd[1445]: time="2025-01-30T12:55:42.973349465Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully" Jan 30 12:55:42.973365 containerd[1445]: time="2025-01-30T12:55:42.973362985Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully" Jan 30 12:55:42.976916 containerd[1445]: time="2025-01-30T12:55:42.976786149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:5,}" Jan 30 12:55:42.977066 kubelet[2613]: I0130 12:55:42.977043 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634" Jan 30 12:55:42.978196 containerd[1445]: 
time="2025-01-30T12:55:42.978163511Z" level=info msg="StopPodSandbox for \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\"" Jan 30 12:55:42.978468 containerd[1445]: time="2025-01-30T12:55:42.978445431Z" level=info msg="Ensure that sandbox d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634 in task-service has been cleanup successfully" Jan 30 12:55:42.978736 containerd[1445]: time="2025-01-30T12:55:42.978697231Z" level=info msg="TearDown network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" successfully" Jan 30 12:55:42.978768 containerd[1445]: time="2025-01-30T12:55:42.978734871Z" level=info msg="StopPodSandbox for \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" returns successfully" Jan 30 12:55:42.979323 containerd[1445]: time="2025-01-30T12:55:42.979294472Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\"" Jan 30 12:55:42.979401 containerd[1445]: time="2025-01-30T12:55:42.979386152Z" level=info msg="TearDown network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" successfully" Jan 30 12:55:42.979444 containerd[1445]: time="2025-01-30T12:55:42.979400352Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" returns successfully" Jan 30 12:55:42.980066 containerd[1445]: time="2025-01-30T12:55:42.980040873Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\"" Jan 30 12:55:42.980325 containerd[1445]: time="2025-01-30T12:55:42.980159473Z" level=info msg="TearDown network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" successfully" Jan 30 12:55:42.980354 containerd[1445]: time="2025-01-30T12:55:42.980328993Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" returns successfully" Jan 30 12:55:42.982639 
containerd[1445]: time="2025-01-30T12:55:42.982549996Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\"" Jan 30 12:55:42.982733 containerd[1445]: time="2025-01-30T12:55:42.982649116Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully" Jan 30 12:55:42.982733 containerd[1445]: time="2025-01-30T12:55:42.982659756Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully" Jan 30 12:55:42.983061 containerd[1445]: time="2025-01-30T12:55:42.983040156Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\"" Jan 30 12:55:42.983231 containerd[1445]: time="2025-01-30T12:55:42.983141956Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully" Jan 30 12:55:42.983231 containerd[1445]: time="2025-01-30T12:55:42.983156396Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully" Jan 30 12:55:42.983975 containerd[1445]: time="2025-01-30T12:55:42.983683277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:5,}" Jan 30 12:55:42.987243 kubelet[2613]: E0130 12:55:42.987144 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:43.002378 kubelet[2613]: I0130 12:55:43.002347 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f" Jan 30 12:55:43.003826 containerd[1445]: time="2025-01-30T12:55:43.003408180Z" level=info msg="StopPodSandbox for 
\"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\"" Jan 30 12:55:43.003826 containerd[1445]: time="2025-01-30T12:55:43.003650100Z" level=info msg="Ensure that sandbox 159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f in task-service has been cleanup successfully" Jan 30 12:55:43.004342 containerd[1445]: time="2025-01-30T12:55:43.004295181Z" level=info msg="TearDown network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" successfully" Jan 30 12:55:43.004342 containerd[1445]: time="2025-01-30T12:55:43.004322621Z" level=info msg="StopPodSandbox for \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" returns successfully" Jan 30 12:55:43.009402 kubelet[2613]: I0130 12:55:43.009334 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pxzgm" podStartSLOduration=1.846719502 podStartE2EDuration="17.009315627s" podCreationTimestamp="2025-01-30 12:55:26 +0000 UTC" firstStartedPulling="2025-01-30 12:55:27.361273132 +0000 UTC m=+23.806215373" lastFinishedPulling="2025-01-30 12:55:42.523869257 +0000 UTC m=+38.968811498" observedRunningTime="2025-01-30 12:55:43.005426422 +0000 UTC m=+39.450368663" watchObservedRunningTime="2025-01-30 12:55:43.009315627 +0000 UTC m=+39.454257828" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.009768387Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\"" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.009906667Z" level=info msg="TearDown network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" successfully" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.009920227Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" returns successfully" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.010575948Z" 
level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.010655908Z" level=info msg="TearDown network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" successfully" Jan 30 12:55:43.010711 containerd[1445]: time="2025-01-30T12:55:43.010666588Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" returns successfully" Jan 30 12:55:43.011245 containerd[1445]: time="2025-01-30T12:55:43.011214469Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:55:43.011321 containerd[1445]: time="2025-01-30T12:55:43.011301229Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully" Jan 30 12:55:43.011321 containerd[1445]: time="2025-01-30T12:55:43.011315149Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully" Jan 30 12:55:43.011762 containerd[1445]: time="2025-01-30T12:55:43.011737429Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:55:43.011834 containerd[1445]: time="2025-01-30T12:55:43.011817669Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:55:43.011834 containerd[1445]: time="2025-01-30T12:55:43.011830869Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:55:43.012210 kubelet[2613]: I0130 12:55:43.012179 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d" Jan 30 12:55:43.013084 containerd[1445]: 
time="2025-01-30T12:55:43.013050231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:5,}" Jan 30 12:55:43.013442 containerd[1445]: time="2025-01-30T12:55:43.013410751Z" level=info msg="StopPodSandbox for \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\"" Jan 30 12:55:43.015239 containerd[1445]: time="2025-01-30T12:55:43.013556351Z" level=info msg="Ensure that sandbox 2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d in task-service has been cleanup successfully" Jan 30 12:55:43.015239 containerd[1445]: time="2025-01-30T12:55:43.013737231Z" level=info msg="TearDown network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" successfully" Jan 30 12:55:43.015239 containerd[1445]: time="2025-01-30T12:55:43.013750191Z" level=info msg="StopPodSandbox for \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" returns successfully" Jan 30 12:55:43.018233 containerd[1445]: time="2025-01-30T12:55:43.018196996Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\"" Jan 30 12:55:43.018450 containerd[1445]: time="2025-01-30T12:55:43.018368477Z" level=info msg="TearDown network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" successfully" Jan 30 12:55:43.018450 containerd[1445]: time="2025-01-30T12:55:43.018384077Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" returns successfully" Jan 30 12:55:43.019903 containerd[1445]: time="2025-01-30T12:55:43.019768358Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" Jan 30 12:55:43.019903 containerd[1445]: time="2025-01-30T12:55:43.019858718Z" level=info msg="TearDown network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" 
successfully" Jan 30 12:55:43.019994 containerd[1445]: time="2025-01-30T12:55:43.019942758Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" returns successfully" Jan 30 12:55:43.021554 containerd[1445]: time="2025-01-30T12:55:43.021508560Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:55:43.022851 kubelet[2613]: I0130 12:55:43.022820 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec" Jan 30 12:55:43.024782 containerd[1445]: time="2025-01-30T12:55:43.024741084Z" level=info msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully" Jan 30 12:55:43.024782 containerd[1445]: time="2025-01-30T12:55:43.024778684Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully" Jan 30 12:55:43.025208 containerd[1445]: time="2025-01-30T12:55:43.025184244Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:55:43.025290 containerd[1445]: time="2025-01-30T12:55:43.025274604Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:55:43.025290 containerd[1445]: time="2025-01-30T12:55:43.025288444Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:55:43.025712 containerd[1445]: time="2025-01-30T12:55:43.025630524Z" level=info msg="StopPodSandbox for \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\"" Jan 30 12:55:43.025747 kubelet[2613]: E0130 12:55:43.025482 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:43.025931 containerd[1445]: time="2025-01-30T12:55:43.025828805Z" level=info msg="Ensure that sandbox fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec in task-service has been cleanup successfully" Jan 30 12:55:43.027245 containerd[1445]: time="2025-01-30T12:55:43.027216766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:5,}" Jan 30 12:55:43.027529 containerd[1445]: time="2025-01-30T12:55:43.027496727Z" level=info msg="TearDown network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" successfully" Jan 30 12:55:43.027554 containerd[1445]: time="2025-01-30T12:55:43.027527927Z" level=info msg="StopPodSandbox for \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" returns successfully" Jan 30 12:55:43.028333 containerd[1445]: time="2025-01-30T12:55:43.028309167Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\"" Jan 30 12:55:43.028703 containerd[1445]: time="2025-01-30T12:55:43.028621488Z" level=info msg="TearDown network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" successfully" Jan 30 12:55:43.032134 containerd[1445]: time="2025-01-30T12:55:43.028640168Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" returns successfully" Jan 30 12:55:43.037758 containerd[1445]: time="2025-01-30T12:55:43.037259497Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\"" Jan 30 12:55:43.037758 containerd[1445]: time="2025-01-30T12:55:43.037364977Z" level=info msg="TearDown network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" successfully" Jan 30 12:55:43.037758 containerd[1445]: time="2025-01-30T12:55:43.037375377Z" 
level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" returns successfully" Jan 30 12:55:43.037451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4-shm.mount: Deactivated successfully. Jan 30 12:55:43.037548 systemd[1]: run-netns-cni\x2d812e9e1b\x2d19a0\x2d2d9d\x2df991\x2d4e1464e48d70.mount: Deactivated successfully. Jan 30 12:55:43.037594 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec-shm.mount: Deactivated successfully. Jan 30 12:55:43.037646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1466363974.mount: Deactivated successfully. Jan 30 12:55:43.042872 containerd[1445]: time="2025-01-30T12:55:43.042099943Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\"" Jan 30 12:55:43.042872 containerd[1445]: time="2025-01-30T12:55:43.042263943Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully" Jan 30 12:55:43.042872 containerd[1445]: time="2025-01-30T12:55:43.042275703Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully" Jan 30 12:55:43.053190 containerd[1445]: time="2025-01-30T12:55:43.053039235Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\"" Jan 30 12:55:43.053374 containerd[1445]: time="2025-01-30T12:55:43.053340475Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully" Jan 30 12:55:43.053374 containerd[1445]: time="2025-01-30T12:55:43.053361795Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully" Jan 30 12:55:43.054005 kubelet[2613]: E0130 
12:55:43.053657 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:43.058619 kubelet[2613]: I0130 12:55:43.058576 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4" Jan 30 12:55:43.059631 containerd[1445]: time="2025-01-30T12:55:43.056776999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:5,}" Jan 30 12:55:43.063067 containerd[1445]: time="2025-01-30T12:55:43.062757925Z" level=info msg="StopPodSandbox for \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\"" Jan 30 12:55:43.063067 containerd[1445]: time="2025-01-30T12:55:43.062952406Z" level=info msg="Ensure that sandbox 4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4 in task-service has been cleanup successfully" Jan 30 12:55:43.063734 containerd[1445]: time="2025-01-30T12:55:43.063707726Z" level=info msg="TearDown network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" successfully" Jan 30 12:55:43.063830 containerd[1445]: time="2025-01-30T12:55:43.063800166Z" level=info msg="StopPodSandbox for \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" returns successfully" Jan 30 12:55:43.064555 containerd[1445]: time="2025-01-30T12:55:43.064512007Z" level=info msg="StopPodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\"" Jan 30 12:55:43.103387 containerd[1445]: time="2025-01-30T12:55:43.102439489Z" level=info msg="TearDown network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" successfully" Jan 30 12:55:43.103387 containerd[1445]: time="2025-01-30T12:55:43.102478489Z" level=info msg="StopPodSandbox for 
\"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" returns successfully" Jan 30 12:55:43.103387 containerd[1445]: time="2025-01-30T12:55:43.102916530Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" Jan 30 12:55:43.103387 containerd[1445]: time="2025-01-30T12:55:43.103017850Z" level=info msg="TearDown network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" successfully" Jan 30 12:55:43.103387 containerd[1445]: time="2025-01-30T12:55:43.103138650Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" returns successfully" Jan 30 12:55:43.103650 containerd[1445]: time="2025-01-30T12:55:43.103501650Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\"" Jan 30 12:55:43.103650 containerd[1445]: time="2025-01-30T12:55:43.103580210Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully" Jan 30 12:55:43.103650 containerd[1445]: time="2025-01-30T12:55:43.103591130Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully" Jan 30 12:55:43.104252 containerd[1445]: time="2025-01-30T12:55:43.103829731Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\"" Jan 30 12:55:43.104252 containerd[1445]: time="2025-01-30T12:55:43.104053171Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully" Jan 30 12:55:43.104252 containerd[1445]: time="2025-01-30T12:55:43.104084451Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully" Jan 30 12:55:43.106253 containerd[1445]: time="2025-01-30T12:55:43.106081493Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:5,}" Jan 30 12:55:43.792426 systemd-networkd[1385]: cali67068342224: Link UP Jan 30 12:55:43.793060 systemd-networkd[1385]: cali67068342224: Gained carrier Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.162 [INFO][4609] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.192 [INFO][4609] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0 coredns-7db6d8ff4d- kube-system 0033dce8-2617-4a93-af94-68a801729315 822 0 2025-01-30 12:55:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nsdsm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67068342224 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.192 [INFO][4609] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.691 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" HandleID="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" 
Workload="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.730 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" HandleID="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Workload="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000376fa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nsdsm", "timestamp":"2025-01-30 12:55:43.691865938 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.730 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.730 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.735 [INFO][4667] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.740 [INFO][4667] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.756 [INFO][4667] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.761 [INFO][4667] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.763 [INFO][4667] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.766 [INFO][4667] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.766 [INFO][4667] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.767 [INFO][4667] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89 Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.772 [INFO][4667] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4667] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4667] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" host="localhost" Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:55:43.810754 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" HandleID="k8s-pod-network.89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Workload="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.781 [INFO][4609] cni-plugin/k8s.go 386: Populated endpoint ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0033dce8-2617-4a93-af94-68a801729315", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-nsdsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67068342224", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.781 [INFO][4609] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.781 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67068342224 ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.791 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 
12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.791 [INFO][4609] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0033dce8-2617-4a93-af94-68a801729315", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89", Pod:"coredns-7db6d8ff4d-nsdsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67068342224", MAC:"4e:aa:46:6f:cf:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.811472 containerd[1445]: 2025-01-30 12:55:43.807 [INFO][4609] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nsdsm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nsdsm-eth0" Jan 30 12:55:43.832950 systemd-networkd[1385]: cali37c572e15f5: Link UP Jan 30 12:55:43.833205 systemd-networkd[1385]: cali37c572e15f5: Gained carrier Jan 30 12:55:43.846055 containerd[1445]: time="2025-01-30T12:55:43.844870746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:43.846055 containerd[1445]: time="2025-01-30T12:55:43.845712467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:43.846055 containerd[1445]: time="2025-01-30T12:55:43.845727987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.846996 containerd[1445]: time="2025-01-30T12:55:43.846928708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.046 [INFO][4571] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.187 [INFO][4571] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0 calico-kube-controllers-59db857b5c- calico-system 97079026-7789-44af-adb9-0f82fffc0d08 818 0 2025-01-30 12:55:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59db857b5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59db857b5c-zqhl4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali37c572e15f5 [] []}} ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.187 [INFO][4571] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.696 [INFO][4665] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" HandleID="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" 
Workload="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.730 [INFO][4665] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" HandleID="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Workload="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028acd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59db857b5c-zqhl4", "timestamp":"2025-01-30 12:55:43.696879663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.731 [INFO][4665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.778 [INFO][4665] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.781 [INFO][4665] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.789 [INFO][4665] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.796 [INFO][4665] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.798 [INFO][4665] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.801 [INFO][4665] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.801 [INFO][4665] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.805 [INFO][4665] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.812 [INFO][4665] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.823 [INFO][4665] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.824 [INFO][4665] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" host="localhost" Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.824 [INFO][4665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:55:43.847706 containerd[1445]: 2025-01-30 12:55:43.824 [INFO][4665] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" HandleID="k8s-pod-network.abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Workload="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.829 [INFO][4571] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0", GenerateName:"calico-kube-controllers-59db857b5c-", Namespace:"calico-system", SelfLink:"", UID:"97079026-7789-44af-adb9-0f82fffc0d08", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59db857b5c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59db857b5c-zqhl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37c572e15f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.829 [INFO][4571] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.829 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37c572e15f5 ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.831 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.832 [INFO][4571] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0", GenerateName:"calico-kube-controllers-59db857b5c-", Namespace:"calico-system", SelfLink:"", UID:"97079026-7789-44af-adb9-0f82fffc0d08", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59db857b5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e", Pod:"calico-kube-controllers-59db857b5c-zqhl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali37c572e15f5", MAC:"3a:b9:a3:ef:26:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.848329 containerd[1445]: 2025-01-30 12:55:43.845 [INFO][4571] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e" Namespace="calico-system" Pod="calico-kube-controllers-59db857b5c-zqhl4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59db857b5c--zqhl4-eth0" Jan 30 12:55:43.869118 systemd[1]: Started cri-containerd-89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89.scope - libcontainer container 89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89. Jan 30 12:55:43.876207 systemd-networkd[1385]: cali2005a1d11ea: Link UP Jan 30 12:55:43.876971 containerd[1445]: time="2025-01-30T12:55:43.876739021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:43.877077 systemd-networkd[1385]: cali2005a1d11ea: Gained carrier Jan 30 12:55:43.877641 containerd[1445]: time="2025-01-30T12:55:43.877323262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:43.877641 containerd[1445]: time="2025-01-30T12:55:43.877343662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.877641 containerd[1445]: time="2025-01-30T12:55:43.877420422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.163 [INFO][4621] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.192 [INFO][4621] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2582n-eth0 coredns-7db6d8ff4d- kube-system f805eb31-5209-4788-beca-600c9f139d8b 819 0 2025-01-30 12:55:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2582n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2005a1d11ea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.192 [INFO][4621] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.691 [INFO][4669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" HandleID="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Workload="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.731 [INFO][4669] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" HandleID="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Workload="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011dd40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2582n", "timestamp":"2025-01-30 12:55:43.691570217 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.732 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.824 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.824 [INFO][4669] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.827 [INFO][4669] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.834 [INFO][4669] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.842 [INFO][4669] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.847 [INFO][4669] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.850 [INFO][4669] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.850 [INFO][4669] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.854 [INFO][4669] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632 Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.861 [INFO][4669] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.868 [INFO][4669] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.868 [INFO][4669] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" host="localhost" Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.868 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 12:55:43.892931 containerd[1445]: 2025-01-30 12:55:43.868 [INFO][4669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" HandleID="k8s-pod-network.50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Workload="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.873 [INFO][4621] cni-plugin/k8s.go 386: Populated endpoint ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2582n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f805eb31-5209-4788-beca-600c9f139d8b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-2582n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2005a1d11ea", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.873 [INFO][4621] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.873 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2005a1d11ea ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.877 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.878 [INFO][4621] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2582n-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f805eb31-5209-4788-beca-600c9f139d8b", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632", Pod:"coredns-7db6d8ff4d-2582n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2005a1d11ea", MAC:"4e:c5:9e:43:cb:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:43.893451 containerd[1445]: 2025-01-30 12:55:43.889 [INFO][4621] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-2582n" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2582n-eth0" Jan 30 12:55:43.901133 systemd[1]: Started cri-containerd-abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e.scope - libcontainer container abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e. Jan 30 12:55:43.903739 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:43.921076 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:43.926963 containerd[1445]: time="2025-01-30T12:55:43.926403316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:43.926963 containerd[1445]: time="2025-01-30T12:55:43.926939916Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:43.926963 containerd[1445]: time="2025-01-30T12:55:43.926953596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.927145 containerd[1445]: time="2025-01-30T12:55:43.927050876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:43.929723 containerd[1445]: time="2025-01-30T12:55:43.929686079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nsdsm,Uid:0033dce8-2617-4a93-af94-68a801729315,Namespace:kube-system,Attempt:5,} returns sandbox id \"89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89\"" Jan 30 12:55:43.930915 kubelet[2613]: E0130 12:55:43.930847 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:43.933280 containerd[1445]: time="2025-01-30T12:55:43.933232523Z" level=info msg="CreateContainer within sandbox \"89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:55:43.951170 containerd[1445]: time="2025-01-30T12:55:43.950798262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59db857b5c-zqhl4,Uid:97079026-7789-44af-adb9-0f82fffc0d08,Namespace:calico-system,Attempt:5,} returns sandbox id \"abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e\"" Jan 30 12:55:43.953078 systemd[1]: Started cri-containerd-50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632.scope - libcontainer container 50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632. 
Jan 30 12:55:43.954453 containerd[1445]: time="2025-01-30T12:55:43.953015745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 12:55:43.964351 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:43.981604 containerd[1445]: time="2025-01-30T12:55:43.981568536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2582n,Uid:f805eb31-5209-4788-beca-600c9f139d8b,Namespace:kube-system,Attempt:5,} returns sandbox id \"50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632\"" Jan 30 12:55:43.983796 kubelet[2613]: E0130 12:55:43.983767 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:43.994341 containerd[1445]: time="2025-01-30T12:55:43.994291190Z" level=info msg="CreateContainer within sandbox \"50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 12:55:43.996330 systemd-networkd[1385]: cali43ba99c5ac5: Link UP Jan 30 12:55:43.997278 systemd-networkd[1385]: cali43ba99c5ac5: Gained carrier Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.136 [INFO][4583] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.186 [INFO][4583] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0 calico-apiserver-ccfbbf7fd- calico-apiserver dad53f38-004e-44aa-900a-c314d9a9e9de 821 0 2025-01-30 12:55:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccfbbf7fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ccfbbf7fd-c9cpd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43ba99c5ac5 [] []}} ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.187 [INFO][4583] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.694 [INFO][4666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" HandleID="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.733 [INFO][4666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" HandleID="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391b30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ccfbbf7fd-c9cpd", "timestamp":"2025-01-30 12:55:43.694675741 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.734 [INFO][4666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.868 [INFO][4666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.869 [INFO][4666] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.871 [INFO][4666] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.890 [INFO][4666] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.895 [INFO][4666] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.898 [INFO][4666] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.901 [INFO][4666] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.901 [INFO][4666] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.904 [INFO][4666] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701 Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.948 [INFO][4666] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4666] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4666] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" host="localhost" Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:55:44.015996 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" HandleID="k8s-pod-network.a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:43.990 [INFO][4583] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0", GenerateName:"calico-apiserver-ccfbbf7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"dad53f38-004e-44aa-900a-c314d9a9e9de", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 26, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfbbf7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ccfbbf7fd-c9cpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba99c5ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:43.991 [INFO][4583] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:43.991 [INFO][4583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43ba99c5ac5 ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:43.997 [INFO][4583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" 
Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:43.998 [INFO][4583] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0", GenerateName:"calico-apiserver-ccfbbf7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"dad53f38-004e-44aa-900a-c314d9a9e9de", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfbbf7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701", Pod:"calico-apiserver-ccfbbf7fd-c9cpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43ba99c5ac5", MAC:"9a:60:41:d2:42:c7", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.016601 containerd[1445]: 2025-01-30 12:55:44.013 [INFO][4583] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-c9cpd" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--c9cpd-eth0" Jan 30 12:55:44.022913 containerd[1445]: time="2025-01-30T12:55:44.022354380Z" level=info msg="CreateContainer within sandbox \"89062199557b061f19a2ac663e7214435b81f01308fda7332dde41eb0a4a1f89\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"828bc9c44c1b375885e3a3658b84cafb8f70358c0d867b6b4fa340ea383b94fa\"" Jan 30 12:55:44.023354 containerd[1445]: time="2025-01-30T12:55:44.023323021Z" level=info msg="StartContainer for \"828bc9c44c1b375885e3a3658b84cafb8f70358c0d867b6b4fa340ea383b94fa\"" Jan 30 12:55:44.037502 systemd[1]: run-netns-cni\x2d3aec44be\x2d705b\x2d8d9f\x2da82a\x2df7df0c35f9cd.mount: Deactivated successfully. Jan 30 12:55:44.051922 containerd[1445]: time="2025-01-30T12:55:44.051804490Z" level=info msg="CreateContainer within sandbox \"50e9b87ca453e31c93235c6715302223d5db1ace7e6bb310c90c1656fe4d1632\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83fb7557c43f832d8bc283fd7e1980a8e6b7d43a1ee805eee367ef28751890ce\"" Jan 30 12:55:44.061101 systemd-networkd[1385]: cali343167de942: Link UP Jan 30 12:55:44.061364 systemd-networkd[1385]: cali343167de942: Gained carrier Jan 30 12:55:44.062785 containerd[1445]: time="2025-01-30T12:55:44.062624181Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:44.062785 containerd[1445]: time="2025-01-30T12:55:44.062698101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:44.062785 containerd[1445]: time="2025-01-30T12:55:44.062714101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.064088 containerd[1445]: time="2025-01-30T12:55:44.062815222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.067632 containerd[1445]: time="2025-01-30T12:55:44.067547346Z" level=info msg="StartContainer for \"83fb7557c43f832d8bc283fd7e1980a8e6b7d43a1ee805eee367ef28751890ce\"" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.337 [INFO][4645] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.386 [INFO][4645] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0 calico-apiserver-ccfbbf7fd- calico-apiserver cf956daa-f7be-4272-9ce2-d1c78864d112 812 0 2025-01-30 12:55:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:ccfbbf7fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-ccfbbf7fd-dg5hc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali343167de942 [] []}} ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.386 [INFO][4645] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.691 [INFO][4707] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" HandleID="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.736 [INFO][4707] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" HandleID="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a1d30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-ccfbbf7fd-dg5hc", "timestamp":"2025-01-30 12:55:43.691569657 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.736 [INFO][4707] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4707] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.986 [INFO][4707] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:43.990 [INFO][4707] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.000 [INFO][4707] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.010 [INFO][4707] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.019 [INFO][4707] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.024 [INFO][4707] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.024 [INFO][4707] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.026 [INFO][4707] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91 Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.031 [INFO][4707] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4707] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4707] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" host="localhost" Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4707] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:55:44.086012 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4707] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" HandleID="k8s-pod-network.d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Workload="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.053 [INFO][4645] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0", GenerateName:"calico-apiserver-ccfbbf7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf956daa-f7be-4272-9ce2-d1c78864d112", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfbbf7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-ccfbbf7fd-dg5hc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343167de942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.053 [INFO][4645] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.053 [INFO][4645] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali343167de942 ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.060 [INFO][4645] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.061 [INFO][4645] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0", GenerateName:"calico-apiserver-ccfbbf7fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"cf956daa-f7be-4272-9ce2-d1c78864d112", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"ccfbbf7fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91", Pod:"calico-apiserver-ccfbbf7fd-dg5hc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali343167de942", MAC:"72:b4:ed:22:57:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.087292 containerd[1445]: 2025-01-30 12:55:44.077 [INFO][4645] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91" 
Namespace="calico-apiserver" Pod="calico-apiserver-ccfbbf7fd-dg5hc" WorkloadEndpoint="localhost-k8s-calico--apiserver--ccfbbf7fd--dg5hc-eth0" Jan 30 12:55:44.087083 systemd[1]: Started cri-containerd-828bc9c44c1b375885e3a3658b84cafb8f70358c0d867b6b4fa340ea383b94fa.scope - libcontainer container 828bc9c44c1b375885e3a3658b84cafb8f70358c0d867b6b4fa340ea383b94fa. Jan 30 12:55:44.099136 systemd[1]: Started cri-containerd-a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701.scope - libcontainer container a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701. Jan 30 12:55:44.101178 kubelet[2613]: E0130 12:55:44.100767 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:44.119372 systemd-networkd[1385]: cali2ef8a3e4d54: Link UP Jan 30 12:55:44.119704 systemd-networkd[1385]: cali2ef8a3e4d54: Gained carrier Jan 30 12:55:44.131517 containerd[1445]: time="2025-01-30T12:55:44.131359572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:44.131622 containerd[1445]: time="2025-01-30T12:55:44.131525052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:44.132802 containerd[1445]: time="2025-01-30T12:55:44.131578172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.132802 containerd[1445]: time="2025-01-30T12:55:44.132231413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.141842 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.157 [INFO][4591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.194 [INFO][4591] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rx85w-eth0 csi-node-driver- calico-system c53dd490-f49e-4931-b31d-7e8897227295 673 0 2025-01-30 12:55:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rx85w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2ef8a3e4d54 [] []}} ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.194 [INFO][4591] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.691 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" HandleID="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" 
Workload="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.735 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" HandleID="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Workload="localhost-k8s-csi--node--driver--rx85w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400068c930), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rx85w", "timestamp":"2025-01-30 12:55:43.691573217 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:43.736 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.047 [INFO][4668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.052 [INFO][4668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.062 [INFO][4668] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.077 [INFO][4668] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.081 [INFO][4668] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.086 [INFO][4668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.086 [INFO][4668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.092 [INFO][4668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190 Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.097 [INFO][4668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.105 [INFO][4668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.105 [INFO][4668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" host="localhost" Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.105 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 12:55:44.144604 containerd[1445]: 2025-01-30 12:55:44.105 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" HandleID="k8s-pod-network.3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Workload="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.114 [INFO][4591] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rx85w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53dd490-f49e-4931-b31d-7e8897227295", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rx85w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ef8a3e4d54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.114 [INFO][4591] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.114 [INFO][4591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ef8a3e4d54 ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.120 [INFO][4591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.123 [INFO][4591] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" 
Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rx85w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c53dd490-f49e-4931-b31d-7e8897227295", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 12, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190", Pod:"csi-node-driver-rx85w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2ef8a3e4d54", MAC:"12:3f:5f:7f:0d:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 12:55:44.145157 containerd[1445]: 2025-01-30 12:55:44.140 [INFO][4591] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190" Namespace="calico-system" Pod="csi-node-driver-rx85w" WorkloadEndpoint="localhost-k8s-csi--node--driver--rx85w-eth0" Jan 30 12:55:44.159676 containerd[1445]: 
time="2025-01-30T12:55:44.159586361Z" level=info msg="StartContainer for \"828bc9c44c1b375885e3a3658b84cafb8f70358c0d867b6b4fa340ea383b94fa\" returns successfully" Jan 30 12:55:44.170516 systemd[1]: Started cri-containerd-83fb7557c43f832d8bc283fd7e1980a8e6b7d43a1ee805eee367ef28751890ce.scope - libcontainer container 83fb7557c43f832d8bc283fd7e1980a8e6b7d43a1ee805eee367ef28751890ce. Jan 30 12:55:44.171737 systemd[1]: Started cri-containerd-d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91.scope - libcontainer container d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91. Jan 30 12:55:44.180231 containerd[1445]: time="2025-01-30T12:55:44.179453702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:55:44.182030 containerd[1445]: time="2025-01-30T12:55:44.181587384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:55:44.182030 containerd[1445]: time="2025-01-30T12:55:44.181997464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.182564 containerd[1445]: time="2025-01-30T12:55:44.182224025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:55:44.200249 containerd[1445]: time="2025-01-30T12:55:44.200206923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-c9cpd,Uid:dad53f38-004e-44aa-900a-c314d9a9e9de,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701\"" Jan 30 12:55:44.206258 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:55876.service - OpenSSH per-connection server daemon (10.0.0.1:55876). 
Jan 30 12:55:44.222259 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:44.251133 systemd[1]: Started cri-containerd-3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190.scope - libcontainer container 3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190. Jan 30 12:55:44.278518 containerd[1445]: time="2025-01-30T12:55:44.278284964Z" level=info msg="StartContainer for \"83fb7557c43f832d8bc283fd7e1980a8e6b7d43a1ee805eee367ef28751890ce\" returns successfully" Jan 30 12:55:44.282788 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 55876 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:44.284155 containerd[1445]: time="2025-01-30T12:55:44.283424929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-ccfbbf7fd-dg5hc,Uid:cf956daa-f7be-4272-9ce2-d1c78864d112,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91\"" Jan 30 12:55:44.286944 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.298791 systemd-logind[1427]: New session 10 of user core. Jan 30 12:55:44.299634 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:55:44.303284 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 30 12:55:44.326244 containerd[1445]: time="2025-01-30T12:55:44.326078573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rx85w,Uid:c53dd490-f49e-4931-b31d-7e8897227295,Namespace:calico-system,Attempt:5,} returns sandbox id \"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190\"" Jan 30 12:55:44.470174 sshd[5148]: Connection closed by 10.0.0.1 port 55876 Jan 30 12:55:44.472216 sshd-session[5109]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.478651 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:55876.service: Deactivated successfully. Jan 30 12:55:44.480383 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 12:55:44.480974 systemd-logind[1427]: Session 10 logged out. Waiting for processes to exit. Jan 30 12:55:44.483457 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:55902.service - OpenSSH per-connection server daemon (10.0.0.1:55902). Jan 30 12:55:44.484448 systemd-logind[1427]: Removed session 10. Jan 30 12:55:44.528343 sshd[5172]: Accepted publickey for core from 10.0.0.1 port 55902 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:44.529771 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.534124 systemd-logind[1427]: New session 11 of user core. Jan 30 12:55:44.540089 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 12:55:44.833311 sshd[5174]: Connection closed by 10.0.0.1 port 55902 Jan 30 12:55:44.832389 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:44.842048 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:55902.service: Deactivated successfully. Jan 30 12:55:44.846764 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 12:55:44.851623 systemd-logind[1427]: Session 11 logged out. Waiting for processes to exit. 
Jan 30 12:55:44.870290 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:55924.service - OpenSSH per-connection server daemon (10.0.0.1:55924). Jan 30 12:55:44.871421 systemd-logind[1427]: Removed session 11. Jan 30 12:55:44.898930 kernel: bpftool[5316]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 12:55:44.945535 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 55924 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:44.947137 sshd-session[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:44.954296 systemd-logind[1427]: New session 12 of user core. Jan 30 12:55:44.966663 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 12:55:44.995044 systemd-networkd[1385]: cali67068342224: Gained IPv6LL Jan 30 12:55:45.111912 systemd-networkd[1385]: vxlan.calico: Link UP Jan 30 12:55:45.111919 systemd-networkd[1385]: vxlan.calico: Gained carrier Jan 30 12:55:45.127865 kubelet[2613]: E0130 12:55:45.126880 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:45.153589 kubelet[2613]: E0130 12:55:45.153199 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:45.153589 kubelet[2613]: E0130 12:55:45.153306 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:45.168419 kubelet[2613]: I0130 12:55:45.167674 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2582n" podStartSLOduration=26.167639151 podStartE2EDuration="26.167639151s" podCreationTimestamp="2025-01-30 12:55:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:45.141145565 +0000 UTC m=+41.586087806" watchObservedRunningTime="2025-01-30 12:55:45.167639151 +0000 UTC m=+41.612581392" Jan 30 12:55:45.216526 kubelet[2613]: I0130 12:55:45.216346 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nsdsm" podStartSLOduration=26.216326198 podStartE2EDuration="26.216326198s" podCreationTimestamp="2025-01-30 12:55:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:55:45.215163676 +0000 UTC m=+41.660105917" watchObservedRunningTime="2025-01-30 12:55:45.216326198 +0000 UTC m=+41.661268439" Jan 30 12:55:45.232626 systemd[1]: run-containerd-runc-k8s.io-f5a04db30250bbdd00823b992421ab5e0c70879073205a61e6a8f237ed7edf3e-runc.8JXLca.mount: Deactivated successfully. Jan 30 12:55:45.251292 systemd-networkd[1385]: cali2ef8a3e4d54: Gained IPv6LL Jan 30 12:55:45.370073 sshd[5318]: Connection closed by 10.0.0.1 port 55924 Jan 30 12:55:45.373123 sshd-session[5291]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:45.377304 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:55924.service: Deactivated successfully. Jan 30 12:55:45.382111 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 12:55:45.384569 systemd-logind[1427]: Session 12 logged out. Waiting for processes to exit. Jan 30 12:55:45.386572 systemd-logind[1427]: Removed session 12. 
Jan 30 12:55:45.443141 systemd-networkd[1385]: cali2005a1d11ea: Gained IPv6LL Jan 30 12:55:45.507578 systemd-networkd[1385]: cali343167de942: Gained IPv6LL Jan 30 12:55:45.572334 systemd-networkd[1385]: cali37c572e15f5: Gained IPv6LL Jan 30 12:55:45.670727 containerd[1445]: time="2025-01-30T12:55:45.670640117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 12:55:45.674924 containerd[1445]: time="2025-01-30T12:55:45.674767601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.720771335s" Jan 30 12:55:45.674924 containerd[1445]: time="2025-01-30T12:55:45.674850481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 12:55:45.676179 containerd[1445]: time="2025-01-30T12:55:45.675813162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 12:55:45.682393 containerd[1445]: time="2025-01-30T12:55:45.682316328Z" level=info msg="CreateContainer within sandbox \"abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 12:55:45.683425 containerd[1445]: time="2025-01-30T12:55:45.683386489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.684164 containerd[1445]: time="2025-01-30T12:55:45.684084410Z" level=info msg="ImageCreate event 
name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.686539 containerd[1445]: time="2025-01-30T12:55:45.684631251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:45.697867 containerd[1445]: time="2025-01-30T12:55:45.697825343Z" level=info msg="CreateContainer within sandbox \"abadff29856f81a992ea543a8f3d279ad1a848a8eecbc6849f2e7c01d6c60b6e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"14d5944d9ab3a0710b4ffb44db01f49a49025ba886ad046889a6659979e5a84c\"" Jan 30 12:55:45.699033 containerd[1445]: time="2025-01-30T12:55:45.698996464Z" level=info msg="StartContainer for \"14d5944d9ab3a0710b4ffb44db01f49a49025ba886ad046889a6659979e5a84c\"" Jan 30 12:55:45.699173 systemd-networkd[1385]: cali43ba99c5ac5: Gained IPv6LL Jan 30 12:55:45.743281 systemd[1]: Started cri-containerd-14d5944d9ab3a0710b4ffb44db01f49a49025ba886ad046889a6659979e5a84c.scope - libcontainer container 14d5944d9ab3a0710b4ffb44db01f49a49025ba886ad046889a6659979e5a84c. 
Jan 30 12:55:45.826079 containerd[1445]: time="2025-01-30T12:55:45.826025547Z" level=info msg="StartContainer for \"14d5944d9ab3a0710b4ffb44db01f49a49025ba886ad046889a6659979e5a84c\" returns successfully" Jan 30 12:55:46.157232 kubelet[2613]: E0130 12:55:46.157126 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:46.157941 kubelet[2613]: E0130 12:55:46.157919 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:46.181195 kubelet[2613]: I0130 12:55:46.181133 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59db857b5c-zqhl4" podStartSLOduration=17.458137663 podStartE2EDuration="19.18111568s" podCreationTimestamp="2025-01-30 12:55:27 +0000 UTC" firstStartedPulling="2025-01-30 12:55:43.952701945 +0000 UTC m=+40.397644186" lastFinishedPulling="2025-01-30 12:55:45.675679962 +0000 UTC m=+42.120622203" observedRunningTime="2025-01-30 12:55:46.17027007 +0000 UTC m=+42.615212311" watchObservedRunningTime="2025-01-30 12:55:46.18111568 +0000 UTC m=+42.626057921" Jan 30 12:55:46.979013 systemd-networkd[1385]: vxlan.calico: Gained IPv6LL Jan 30 12:55:47.160438 kubelet[2613]: E0130 12:55:47.160403 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:47.162397 kubelet[2613]: E0130 12:55:47.162234 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:47.446039 containerd[1445]: time="2025-01-30T12:55:47.445982121Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.446947 containerd[1445]: time="2025-01-30T12:55:47.446900162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 12:55:47.448097 containerd[1445]: time="2025-01-30T12:55:47.448055803Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.450178 containerd[1445]: time="2025-01-30T12:55:47.450124245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.451069 containerd[1445]: time="2025-01-30T12:55:47.450980726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.775136004s" Jan 30 12:55:47.451069 containerd[1445]: time="2025-01-30T12:55:47.451022726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 12:55:47.452787 containerd[1445]: time="2025-01-30T12:55:47.452703047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 12:55:47.455613 containerd[1445]: time="2025-01-30T12:55:47.455433009Z" level=info msg="CreateContainer within sandbox \"a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 12:55:47.550759 
containerd[1445]: time="2025-01-30T12:55:47.550693210Z" level=info msg="CreateContainer within sandbox \"a817cb594aa44eb648df83719fe453eabe432bdd146eef55558da81b495f5701\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e6bd5a2a82e76f3c6e7533ebaa47dece246a18731a94926aee9c557e524aa8a\"" Jan 30 12:55:47.552284 containerd[1445]: time="2025-01-30T12:55:47.552230492Z" level=info msg="StartContainer for \"8e6bd5a2a82e76f3c6e7533ebaa47dece246a18731a94926aee9c557e524aa8a\"" Jan 30 12:55:47.607155 systemd[1]: Started cri-containerd-8e6bd5a2a82e76f3c6e7533ebaa47dece246a18731a94926aee9c557e524aa8a.scope - libcontainer container 8e6bd5a2a82e76f3c6e7533ebaa47dece246a18731a94926aee9c557e524aa8a. Jan 30 12:55:47.745303 containerd[1445]: time="2025-01-30T12:55:47.745189976Z" level=info msg="StartContainer for \"8e6bd5a2a82e76f3c6e7533ebaa47dece246a18731a94926aee9c557e524aa8a\" returns successfully" Jan 30 12:55:47.757563 containerd[1445]: time="2025-01-30T12:55:47.757509666Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:47.758283 containerd[1445]: time="2025-01-30T12:55:47.758232347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 12:55:47.762564 containerd[1445]: time="2025-01-30T12:55:47.762430070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 309.688543ms" Jan 30 12:55:47.762564 containerd[1445]: time="2025-01-30T12:55:47.762467630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 12:55:47.763575 containerd[1445]: time="2025-01-30T12:55:47.763546391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 12:55:47.766600 containerd[1445]: time="2025-01-30T12:55:47.766546154Z" level=info msg="CreateContainer within sandbox \"d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 12:55:47.780142 containerd[1445]: time="2025-01-30T12:55:47.780051205Z" level=info msg="CreateContainer within sandbox \"d70bd933aef829da2993b7185c19140b7d9d9a37742a958b42bb9fa87fcccb91\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"119f6c705a46767367026e141ba65408d90b1f817240e7be9d05dd81036be8d1\"" Jan 30 12:55:47.781973 containerd[1445]: time="2025-01-30T12:55:47.780864846Z" level=info msg="StartContainer for \"119f6c705a46767367026e141ba65408d90b1f817240e7be9d05dd81036be8d1\"" Jan 30 12:55:47.815156 systemd[1]: Started cri-containerd-119f6c705a46767367026e141ba65408d90b1f817240e7be9d05dd81036be8d1.scope - libcontainer container 119f6c705a46767367026e141ba65408d90b1f817240e7be9d05dd81036be8d1. 
Jan 30 12:55:47.864284 containerd[1445]: time="2025-01-30T12:55:47.864235397Z" level=info msg="StartContainer for \"119f6c705a46767367026e141ba65408d90b1f817240e7be9d05dd81036be8d1\" returns successfully" Jan 30 12:55:48.175614 kubelet[2613]: E0130 12:55:48.174618 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:55:48.182513 kubelet[2613]: I0130 12:55:48.182432 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-dg5hc" podStartSLOduration=18.704813759 podStartE2EDuration="22.182416818s" podCreationTimestamp="2025-01-30 12:55:26 +0000 UTC" firstStartedPulling="2025-01-30 12:55:44.285758012 +0000 UTC m=+40.730700253" lastFinishedPulling="2025-01-30 12:55:47.763361071 +0000 UTC m=+44.208303312" observedRunningTime="2025-01-30 12:55:48.182159778 +0000 UTC m=+44.627102019" watchObservedRunningTime="2025-01-30 12:55:48.182416818 +0000 UTC m=+44.627359059" Jan 30 12:55:48.203259 kubelet[2613]: I0130 12:55:48.203190 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-ccfbbf7fd-c9cpd" podStartSLOduration=18.957356197 podStartE2EDuration="22.203173114s" podCreationTimestamp="2025-01-30 12:55:26 +0000 UTC" firstStartedPulling="2025-01-30 12:55:44.206041929 +0000 UTC m=+40.650984170" lastFinishedPulling="2025-01-30 12:55:47.451858886 +0000 UTC m=+43.896801087" observedRunningTime="2025-01-30 12:55:48.203067674 +0000 UTC m=+44.648009915" watchObservedRunningTime="2025-01-30 12:55:48.203173114 +0000 UTC m=+44.648115355" Jan 30 12:55:48.869021 containerd[1445]: time="2025-01-30T12:55:48.868956925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 12:55:48.869856 containerd[1445]: time="2025-01-30T12:55:48.869020445Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:48.870637 containerd[1445]: time="2025-01-30T12:55:48.870487886Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:48.877634 containerd[1445]: time="2025-01-30T12:55:48.877589652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:48.879122 containerd[1445]: time="2025-01-30T12:55:48.878976693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.115392982s" Jan 30 12:55:48.879122 containerd[1445]: time="2025-01-30T12:55:48.879014133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 12:55:48.882802 containerd[1445]: time="2025-01-30T12:55:48.882764016Z" level=info msg="CreateContainer within sandbox \"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 12:55:48.898366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3688538116.mount: Deactivated successfully. 
Jan 30 12:55:48.937810 containerd[1445]: time="2025-01-30T12:55:48.937683220Z" level=info msg="CreateContainer within sandbox \"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fa72d9aa2d467ed627b5c20f091b36779916c5dc54d7ea61e2f7cb5a99076bf2\"" Jan 30 12:55:48.939278 containerd[1445]: time="2025-01-30T12:55:48.938547660Z" level=info msg="StartContainer for \"fa72d9aa2d467ed627b5c20f091b36779916c5dc54d7ea61e2f7cb5a99076bf2\"" Jan 30 12:55:48.972098 systemd[1]: Started cri-containerd-fa72d9aa2d467ed627b5c20f091b36779916c5dc54d7ea61e2f7cb5a99076bf2.scope - libcontainer container fa72d9aa2d467ed627b5c20f091b36779916c5dc54d7ea61e2f7cb5a99076bf2. Jan 30 12:55:49.174935 containerd[1445]: time="2025-01-30T12:55:49.174850120Z" level=info msg="StartContainer for \"fa72d9aa2d467ed627b5c20f091b36779916c5dc54d7ea61e2f7cb5a99076bf2\" returns successfully" Jan 30 12:55:49.177407 containerd[1445]: time="2025-01-30T12:55:49.176815761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 12:55:49.184426 kubelet[2613]: I0130 12:55:49.184227 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:55:49.185827 kubelet[2613]: I0130 12:55:49.185053 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 12:55:50.385231 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:55962.service - OpenSSH per-connection server daemon (10.0.0.1:55962). Jan 30 12:55:50.484788 sshd[5634]: Accepted publickey for core from 10.0.0.1 port 55962 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:50.486577 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:50.494266 systemd-logind[1427]: New session 13 of user core. Jan 30 12:55:50.506520 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 12:55:50.567869 containerd[1445]: time="2025-01-30T12:55:50.567818574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.572573 containerd[1445]: time="2025-01-30T12:55:50.572473297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 12:55:50.574177 containerd[1445]: time="2025-01-30T12:55:50.573425898Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.579227 containerd[1445]: time="2025-01-30T12:55:50.579152862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:55:50.580369 containerd[1445]: time="2025-01-30T12:55:50.579817703Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.402965102s" Jan 30 12:55:50.580369 containerd[1445]: time="2025-01-30T12:55:50.579851463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 12:55:50.586971 containerd[1445]: time="2025-01-30T12:55:50.586929908Z" level=info msg="CreateContainer within sandbox \"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 12:55:50.664352 containerd[1445]: time="2025-01-30T12:55:50.664149362Z" level=info msg="CreateContainer within sandbox \"3aa2d100cfe919d2d1f192e4a2d86c92f5c62bf4874e35ddd7790b145a73f190\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534\"" Jan 30 12:55:50.666021 containerd[1445]: time="2025-01-30T12:55:50.665973763Z" level=info msg="StartContainer for \"8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534\"" Jan 30 12:55:50.696876 systemd[1]: run-containerd-runc-k8s.io-8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534-runc.uYoSyR.mount: Deactivated successfully. Jan 30 12:55:50.711721 systemd[1]: Started cri-containerd-8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534.scope - libcontainer container 8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534. Jan 30 12:55:50.754104 containerd[1445]: time="2025-01-30T12:55:50.754000745Z" level=info msg="StartContainer for \"8664c1bdf002c942d03ccee3403a3c5fa9c4752d1afe4baf4447e62abf625534\" returns successfully" Jan 30 12:55:50.780385 sshd[5648]: Connection closed by 10.0.0.1 port 55962 Jan 30 12:55:50.781041 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:50.784595 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:55962.service: Deactivated successfully. Jan 30 12:55:50.786713 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 12:55:50.787499 systemd-logind[1427]: Session 13 logged out. Waiting for processes to exit. Jan 30 12:55:50.788784 systemd-logind[1427]: Removed session 13. 
Jan 30 12:55:51.210788 kubelet[2613]: I0130 12:55:51.210619 2613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rx85w" podStartSLOduration=17.96019577 podStartE2EDuration="24.210604655s" podCreationTimestamp="2025-01-30 12:55:27 +0000 UTC" firstStartedPulling="2025-01-30 12:55:44.331777739 +0000 UTC m=+40.776719940" lastFinishedPulling="2025-01-30 12:55:50.582186584 +0000 UTC m=+47.027128825" observedRunningTime="2025-01-30 12:55:51.210214455 +0000 UTC m=+47.655156696" watchObservedRunningTime="2025-01-30 12:55:51.210604655 +0000 UTC m=+47.655546896" Jan 30 12:55:51.760842 kubelet[2613]: I0130 12:55:51.760785 2613 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 12:55:51.769422 kubelet[2613]: I0130 12:55:51.769337 2613 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 12:55:55.792645 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:43078.service - OpenSSH per-connection server daemon (10.0.0.1:43078). Jan 30 12:55:55.863477 sshd[5707]: Accepted publickey for core from 10.0.0.1 port 43078 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:55.865379 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:55.871383 systemd-logind[1427]: New session 14 of user core. Jan 30 12:55:55.882199 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 12:55:56.038463 sshd[5709]: Connection closed by 10.0.0.1 port 43078 Jan 30 12:55:56.039264 sshd-session[5707]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:56.051690 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:43078.service: Deactivated successfully. Jan 30 12:55:56.053993 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 30 12:55:56.055624 systemd-logind[1427]: Session 14 logged out. Waiting for processes to exit. Jan 30 12:55:56.068446 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:43092.service - OpenSSH per-connection server daemon (10.0.0.1:43092). Jan 30 12:55:56.069695 systemd-logind[1427]: Removed session 14. Jan 30 12:55:56.119274 sshd[5722]: Accepted publickey for core from 10.0.0.1 port 43092 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:56.120851 sshd-session[5722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:56.126520 systemd-logind[1427]: New session 15 of user core. Jan 30 12:55:56.138086 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 12:55:56.404789 sshd[5724]: Connection closed by 10.0.0.1 port 43092 Jan 30 12:55:56.406005 sshd-session[5722]: pam_unix(sshd:session): session closed for user core Jan 30 12:55:56.417402 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:43092.service: Deactivated successfully. Jan 30 12:55:56.419765 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 12:55:56.421141 systemd-logind[1427]: Session 15 logged out. Waiting for processes to exit. Jan 30 12:55:56.429689 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:43102.service - OpenSSH per-connection server daemon (10.0.0.1:43102). Jan 30 12:55:56.430850 systemd-logind[1427]: Removed session 15. Jan 30 12:55:56.487253 sshd[5734]: Accepted publickey for core from 10.0.0.1 port 43102 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:55:56.488881 sshd-session[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:55:56.493261 systemd-logind[1427]: New session 16 of user core. Jan 30 12:55:56.504098 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 30 12:55:58.062577 sshd[5736]: Connection closed by 10.0.0.1 port 43102
Jan 30 12:55:58.063306 sshd-session[5734]: pam_unix(sshd:session): session closed for user core
Jan 30 12:55:58.074775 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:43102.service: Deactivated successfully.
Jan 30 12:55:58.080359 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 12:55:58.082507 systemd-logind[1427]: Session 16 logged out. Waiting for processes to exit.
Jan 30 12:55:58.090310 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:43114.service - OpenSSH per-connection server daemon (10.0.0.1:43114).
Jan 30 12:55:58.092514 systemd-logind[1427]: Removed session 16.
Jan 30 12:55:58.138636 sshd[5754]: Accepted publickey for core from 10.0.0.1 port 43114 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 12:55:58.139957 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:55:58.143754 systemd-logind[1427]: New session 17 of user core.
Jan 30 12:55:58.154055 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 12:55:58.482953 sshd[5757]: Connection closed by 10.0.0.1 port 43114
Jan 30 12:55:58.483870 sshd-session[5754]: pam_unix(sshd:session): session closed for user core
Jan 30 12:55:58.493775 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:43114.service: Deactivated successfully.
Jan 30 12:55:58.497944 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 12:55:58.502512 systemd-logind[1427]: Session 17 logged out. Waiting for processes to exit.
Jan 30 12:55:58.513239 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:43120.service - OpenSSH per-connection server daemon (10.0.0.1:43120).
Jan 30 12:55:58.514694 systemd-logind[1427]: Removed session 17.
Jan 30 12:55:58.555904 sshd[5768]: Accepted publickey for core from 10.0.0.1 port 43120 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo
Jan 30 12:55:58.557308 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 12:55:58.561527 systemd-logind[1427]: New session 18 of user core.
Jan 30 12:55:58.569083 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 12:55:58.734534 sshd[5770]: Connection closed by 10.0.0.1 port 43120
Jan 30 12:55:58.735694 sshd-session[5768]: pam_unix(sshd:session): session closed for user core
Jan 30 12:55:58.742302 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:43120.service: Deactivated successfully.
Jan 30 12:55:58.744241 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 12:55:58.744873 systemd-logind[1427]: Session 18 logged out. Waiting for processes to exit.
Jan 30 12:55:58.745788 systemd-logind[1427]: Removed session 18.
Jan 30 12:56:02.752879 kubelet[2613]: I0130 12:56:02.752742 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 12:56:03.653271 containerd[1445]: time="2025-01-30T12:56:03.653209738Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\""
Jan 30 12:56:03.654077 containerd[1445]: time="2025-01-30T12:56:03.653333538Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully"
Jan 30 12:56:03.654077 containerd[1445]: time="2025-01-30T12:56:03.653344698Z" level=info msg="StopPodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully"
Jan 30 12:56:03.654077 containerd[1445]: time="2025-01-30T12:56:03.653734058Z" level=info msg="RemovePodSandbox for \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\""
Jan 30 12:56:03.658164 containerd[1445]: time="2025-01-30T12:56:03.657582579Z" level=info msg="Forcibly stopping sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\""
Jan 30 12:56:03.658164 containerd[1445]: time="2025-01-30T12:56:03.657719579Z" level=info msg="TearDown network for sandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" successfully"
Jan 30 12:56:03.686601 containerd[1445]: time="2025-01-30T12:56:03.686535188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.686732 containerd[1445]: time="2025-01-30T12:56:03.686629708Z" level=info msg="RemovePodSandbox \"c62a18f762df3a975b382a8d80de419a48710e3a00123666ac4754a183e60362\" returns successfully"
Jan 30 12:56:03.687185 containerd[1445]: time="2025-01-30T12:56:03.687159948Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\""
Jan 30 12:56:03.687287 containerd[1445]: time="2025-01-30T12:56:03.687268028Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully"
Jan 30 12:56:03.687312 containerd[1445]: time="2025-01-30T12:56:03.687285948Z" level=info msg="StopPodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully"
Jan 30 12:56:03.687649 containerd[1445]: time="2025-01-30T12:56:03.687621108Z" level=info msg="RemovePodSandbox for \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\""
Jan 30 12:56:03.687683 containerd[1445]: time="2025-01-30T12:56:03.687650268Z" level=info msg="Forcibly stopping sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\""
Jan 30 12:56:03.687731 containerd[1445]: time="2025-01-30T12:56:03.687717348Z" level=info msg="TearDown network for sandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" successfully"
Jan 30 12:56:03.690474 containerd[1445]: time="2025-01-30T12:56:03.690429869Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.690541 containerd[1445]: time="2025-01-30T12:56:03.690484069Z" level=info msg="RemovePodSandbox \"44ebebf248239456320a6a27e019b382c16b42232183c5ee33f7a61c2c2e59f1\" returns successfully"
Jan 30 12:56:03.691550 containerd[1445]: time="2025-01-30T12:56:03.690806669Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\""
Jan 30 12:56:03.691550 containerd[1445]: time="2025-01-30T12:56:03.690930109Z" level=info msg="TearDown network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" successfully"
Jan 30 12:56:03.691550 containerd[1445]: time="2025-01-30T12:56:03.690941229Z" level=info msg="StopPodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" returns successfully"
Jan 30 12:56:03.691680 containerd[1445]: time="2025-01-30T12:56:03.691647990Z" level=info msg="RemovePodSandbox for \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\""
Jan 30 12:56:03.691680 containerd[1445]: time="2025-01-30T12:56:03.691672150Z" level=info msg="Forcibly stopping sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\""
Jan 30 12:56:03.691776 containerd[1445]: time="2025-01-30T12:56:03.691740870Z" level=info msg="TearDown network for sandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" successfully"
Jan 30 12:56:03.694615 containerd[1445]: time="2025-01-30T12:56:03.694569510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.694682 containerd[1445]: time="2025-01-30T12:56:03.694633471Z" level=info msg="RemovePodSandbox \"89074cbbcaed6cee1ccc9819a2b22a62be5fa6e303eaccd87c9bb718f5c70ea3\" returns successfully"
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695015151Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\""
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695116991Z" level=info msg="TearDown network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" successfully"
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695127991Z" level=info msg="StopPodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" returns successfully"
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695375791Z" level=info msg="RemovePodSandbox for \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\""
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695400991Z" level=info msg="Forcibly stopping sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\""
Jan 30 12:56:03.695571 containerd[1445]: time="2025-01-30T12:56:03.695472671Z" level=info msg="TearDown network for sandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" successfully"
Jan 30 12:56:03.699401 containerd[1445]: time="2025-01-30T12:56:03.699168192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.699401 containerd[1445]: time="2025-01-30T12:56:03.699238792Z" level=info msg="RemovePodSandbox \"117c3650b3e147c1732d92c1cbf0f58e9a0628c4247587fcd8ddfe9130363269\" returns successfully"
Jan 30 12:56:03.699734 containerd[1445]: time="2025-01-30T12:56:03.699634992Z" level=info msg="StopPodSandbox for \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\""
Jan 30 12:56:03.699734 containerd[1445]: time="2025-01-30T12:56:03.699728352Z" level=info msg="TearDown network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" successfully"
Jan 30 12:56:03.699819 containerd[1445]: time="2025-01-30T12:56:03.699738112Z" level=info msg="StopPodSandbox for \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" returns successfully"
Jan 30 12:56:03.701927 containerd[1445]: time="2025-01-30T12:56:03.700916512Z" level=info msg="RemovePodSandbox for \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\""
Jan 30 12:56:03.701927 containerd[1445]: time="2025-01-30T12:56:03.700972032Z" level=info msg="Forcibly stopping sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\""
Jan 30 12:56:03.701927 containerd[1445]: time="2025-01-30T12:56:03.701046472Z" level=info msg="TearDown network for sandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" successfully"
Jan 30 12:56:03.705649 containerd[1445]: time="2025-01-30T12:56:03.705584434Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.705726 containerd[1445]: time="2025-01-30T12:56:03.705695314Z" level=info msg="RemovePodSandbox \"d85f509cf8c878cf7b54cd734a504af5a67d7ef59db552ce4cc4a749072bf634\" returns successfully"
Jan 30 12:56:03.706114 containerd[1445]: time="2025-01-30T12:56:03.706067474Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\""
Jan 30 12:56:03.706166 containerd[1445]: time="2025-01-30T12:56:03.706155354Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully"
Jan 30 12:56:03.706216 containerd[1445]: time="2025-01-30T12:56:03.706166274Z" level=info msg="StopPodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully"
Jan 30 12:56:03.706543 containerd[1445]: time="2025-01-30T12:56:03.706488714Z" level=info msg="RemovePodSandbox for \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\""
Jan 30 12:56:03.706543 containerd[1445]: time="2025-01-30T12:56:03.706518674Z" level=info msg="Forcibly stopping sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\""
Jan 30 12:56:03.706639 containerd[1445]: time="2025-01-30T12:56:03.706600674Z" level=info msg="TearDown network for sandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" successfully"
Jan 30 12:56:03.709441 containerd[1445]: time="2025-01-30T12:56:03.709399195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.709528 containerd[1445]: time="2025-01-30T12:56:03.709466075Z" level=info msg="RemovePodSandbox \"165847c6c80a582d97a050c72f25c511fd213e028f2181a8de23e24402ded3fc\" returns successfully"
Jan 30 12:56:03.710169 containerd[1445]: time="2025-01-30T12:56:03.709873595Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\""
Jan 30 12:56:03.710169 containerd[1445]: time="2025-01-30T12:56:03.710093635Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully"
Jan 30 12:56:03.710169 containerd[1445]: time="2025-01-30T12:56:03.710107275Z" level=info msg="StopPodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully"
Jan 30 12:56:03.710583 containerd[1445]: time="2025-01-30T12:56:03.710512075Z" level=info msg="RemovePodSandbox for \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\""
Jan 30 12:56:03.710583 containerd[1445]: time="2025-01-30T12:56:03.710539755Z" level=info msg="Forcibly stopping sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\""
Jan 30 12:56:03.710664 containerd[1445]: time="2025-01-30T12:56:03.710633235Z" level=info msg="TearDown network for sandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" successfully"
Jan 30 12:56:03.713313 containerd[1445]: time="2025-01-30T12:56:03.713221156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.713579 containerd[1445]: time="2025-01-30T12:56:03.713331076Z" level=info msg="RemovePodSandbox \"33dd78dbbf05e7b6863f25108bfabd1d6e685397aac4895b3a14bfc017ec4c80\" returns successfully"
Jan 30 12:56:03.713658 containerd[1445]: time="2025-01-30T12:56:03.713624196Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\""
Jan 30 12:56:03.713731 containerd[1445]: time="2025-01-30T12:56:03.713709436Z" level=info msg="TearDown network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" successfully"
Jan 30 12:56:03.713731 containerd[1445]: time="2025-01-30T12:56:03.713725716Z" level=info msg="StopPodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" returns successfully"
Jan 30 12:56:03.714505 containerd[1445]: time="2025-01-30T12:56:03.714442917Z" level=info msg="RemovePodSandbox for \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\""
Jan 30 12:56:03.714505 containerd[1445]: time="2025-01-30T12:56:03.714466877Z" level=info msg="Forcibly stopping sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\""
Jan 30 12:56:03.714505 containerd[1445]: time="2025-01-30T12:56:03.714543917Z" level=info msg="TearDown network for sandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" successfully"
Jan 30 12:56:03.717264 containerd[1445]: time="2025-01-30T12:56:03.717199197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.717264 containerd[1445]: time="2025-01-30T12:56:03.717256477Z" level=info msg="RemovePodSandbox \"38f03efbf71f72004e3b45557bc585de1ce10daa14608bed3fc376e47bbf1561\" returns successfully"
Jan 30 12:56:03.717712 containerd[1445]: time="2025-01-30T12:56:03.717605397Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\""
Jan 30 12:56:03.717712 containerd[1445]: time="2025-01-30T12:56:03.717698437Z" level=info msg="TearDown network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" successfully"
Jan 30 12:56:03.717712 containerd[1445]: time="2025-01-30T12:56:03.717708838Z" level=info msg="StopPodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" returns successfully"
Jan 30 12:56:03.719538 containerd[1445]: time="2025-01-30T12:56:03.718115478Z" level=info msg="RemovePodSandbox for \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\""
Jan 30 12:56:03.719538 containerd[1445]: time="2025-01-30T12:56:03.718144518Z" level=info msg="Forcibly stopping sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\""
Jan 30 12:56:03.719538 containerd[1445]: time="2025-01-30T12:56:03.718217998Z" level=info msg="TearDown network for sandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" successfully"
Jan 30 12:56:03.721314 containerd[1445]: time="2025-01-30T12:56:03.721279039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.721529 containerd[1445]: time="2025-01-30T12:56:03.721508719Z" level=info msg="RemovePodSandbox \"7722952a406f90873a8888e7ba1e60ab866224cf7df6b237d8a32dd57f4103d7\" returns successfully"
Jan 30 12:56:03.722000 containerd[1445]: time="2025-01-30T12:56:03.721973759Z" level=info msg="StopPodSandbox for \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\""
Jan 30 12:56:03.722074 containerd[1445]: time="2025-01-30T12:56:03.722064919Z" level=info msg="TearDown network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" successfully"
Jan 30 12:56:03.722109 containerd[1445]: time="2025-01-30T12:56:03.722076439Z" level=info msg="StopPodSandbox for \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" returns successfully"
Jan 30 12:56:03.722943 containerd[1445]: time="2025-01-30T12:56:03.722398839Z" level=info msg="RemovePodSandbox for \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\""
Jan 30 12:56:03.722943 containerd[1445]: time="2025-01-30T12:56:03.722430999Z" level=info msg="Forcibly stopping sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\""
Jan 30 12:56:03.722943 containerd[1445]: time="2025-01-30T12:56:03.722500479Z" level=info msg="TearDown network for sandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" successfully"
Jan 30 12:56:03.725451 containerd[1445]: time="2025-01-30T12:56:03.725414160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.725594 containerd[1445]: time="2025-01-30T12:56:03.725575520Z" level=info msg="RemovePodSandbox \"e746fec82264c1da9c7f69e1140f6276b50aeab4ac83cc570990168f3204f0cf\" returns successfully"
Jan 30 12:56:03.726077 containerd[1445]: time="2025-01-30T12:56:03.726049920Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\""
Jan 30 12:56:03.726326 containerd[1445]: time="2025-01-30T12:56:03.726305880Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully"
Jan 30 12:56:03.726358 containerd[1445]: time="2025-01-30T12:56:03.726327040Z" level=info msg="StopPodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully"
Jan 30 12:56:03.726664 containerd[1445]: time="2025-01-30T12:56:03.726635280Z" level=info msg="RemovePodSandbox for \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\""
Jan 30 12:56:03.726696 containerd[1445]: time="2025-01-30T12:56:03.726661600Z" level=info msg="Forcibly stopping sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\""
Jan 30 12:56:03.726773 containerd[1445]: time="2025-01-30T12:56:03.726728480Z" level=info msg="TearDown network for sandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" successfully"
Jan 30 12:56:03.729430 containerd[1445]: time="2025-01-30T12:56:03.729386321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.729488 containerd[1445]: time="2025-01-30T12:56:03.729451481Z" level=info msg="RemovePodSandbox \"0c38244b09e86e908d73f3611193148721bf9bd33747ed52138c7a6b449dc1a9\" returns successfully"
Jan 30 12:56:03.729971 containerd[1445]: time="2025-01-30T12:56:03.729935761Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\""
Jan 30 12:56:03.730067 containerd[1445]: time="2025-01-30T12:56:03.730050441Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully"
Jan 30 12:56:03.730067 containerd[1445]: time="2025-01-30T12:56:03.730065001Z" level=info msg="StopPodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully"
Jan 30 12:56:03.730448 containerd[1445]: time="2025-01-30T12:56:03.730408001Z" level=info msg="RemovePodSandbox for \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\""
Jan 30 12:56:03.730448 containerd[1445]: time="2025-01-30T12:56:03.730435521Z" level=info msg="Forcibly stopping sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\""
Jan 30 12:56:03.730515 containerd[1445]: time="2025-01-30T12:56:03.730501201Z" level=info msg="TearDown network for sandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" successfully"
Jan 30 12:56:03.734626 containerd[1445]: time="2025-01-30T12:56:03.733490962Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.734626 containerd[1445]: time="2025-01-30T12:56:03.733622762Z" level=info msg="RemovePodSandbox \"c7a143905650f852e07cad3df364d786b76b596b3336b14ae58526e81cbe53d8\" returns successfully"
Jan 30 12:56:03.735145 containerd[1445]: time="2025-01-30T12:56:03.735111243Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\""
Jan 30 12:56:03.735263 containerd[1445]: time="2025-01-30T12:56:03.735242523Z" level=info msg="TearDown network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" successfully"
Jan 30 12:56:03.735321 containerd[1445]: time="2025-01-30T12:56:03.735257843Z" level=info msg="StopPodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" returns successfully"
Jan 30 12:56:03.735619 containerd[1445]: time="2025-01-30T12:56:03.735595603Z" level=info msg="RemovePodSandbox for \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\""
Jan 30 12:56:03.735715 containerd[1445]: time="2025-01-30T12:56:03.735621163Z" level=info msg="Forcibly stopping sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\""
Jan 30 12:56:03.735715 containerd[1445]: time="2025-01-30T12:56:03.735696763Z" level=info msg="TearDown network for sandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" successfully"
Jan 30 12:56:03.738665 containerd[1445]: time="2025-01-30T12:56:03.738618644Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.738739 containerd[1445]: time="2025-01-30T12:56:03.738676564Z" level=info msg="RemovePodSandbox \"c63171b5a22944f2e8ec9ca8ad896b8fa4be9af1311037c6928d00296306f5b7\" returns successfully"
Jan 30 12:56:03.739167 containerd[1445]: time="2025-01-30T12:56:03.739111404Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\""
Jan 30 12:56:03.739254 containerd[1445]: time="2025-01-30T12:56:03.739242044Z" level=info msg="TearDown network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" successfully"
Jan 30 12:56:03.739291 containerd[1445]: time="2025-01-30T12:56:03.739254124Z" level=info msg="StopPodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" returns successfully"
Jan 30 12:56:03.739529 containerd[1445]: time="2025-01-30T12:56:03.739505844Z" level=info msg="RemovePodSandbox for \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\""
Jan 30 12:56:03.739590 containerd[1445]: time="2025-01-30T12:56:03.739532644Z" level=info msg="Forcibly stopping sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\""
Jan 30 12:56:03.739616 containerd[1445]: time="2025-01-30T12:56:03.739599644Z" level=info msg="TearDown network for sandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" successfully"
Jan 30 12:56:03.742046 containerd[1445]: time="2025-01-30T12:56:03.742004525Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.742097 containerd[1445]: time="2025-01-30T12:56:03.742071685Z" level=info msg="RemovePodSandbox \"e442efa806d74ead9b55657a07a4fc4980f250a00bde0579922198bdc1609fc7\" returns successfully"
Jan 30 12:56:03.742498 containerd[1445]: time="2025-01-30T12:56:03.742470045Z" level=info msg="StopPodSandbox for \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\""
Jan 30 12:56:03.742584 containerd[1445]: time="2025-01-30T12:56:03.742567365Z" level=info msg="TearDown network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" successfully"
Jan 30 12:56:03.742584 containerd[1445]: time="2025-01-30T12:56:03.742580685Z" level=info msg="StopPodSandbox for \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" returns successfully"
Jan 30 12:56:03.742892 containerd[1445]: time="2025-01-30T12:56:03.742863045Z" level=info msg="RemovePodSandbox for \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\""
Jan 30 12:56:03.742966 containerd[1445]: time="2025-01-30T12:56:03.742942085Z" level=info msg="Forcibly stopping sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\""
Jan 30 12:56:03.743044 containerd[1445]: time="2025-01-30T12:56:03.743029405Z" level=info msg="TearDown network for sandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" successfully"
Jan 30 12:56:03.747722 containerd[1445]: time="2025-01-30T12:56:03.747541007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.747722 containerd[1445]: time="2025-01-30T12:56:03.747634087Z" level=info msg="RemovePodSandbox \"fd51ca735889c6c535695f0df2de46eade074fbe26d19f4582ee8b5da95256ec\" returns successfully"
Jan 30 12:56:03.748186 containerd[1445]: time="2025-01-30T12:56:03.748144487Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\""
Jan 30 12:56:03.748259 containerd[1445]: time="2025-01-30T12:56:03.748242327Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully"
Jan 30 12:56:03.748323 containerd[1445]: time="2025-01-30T12:56:03.748255847Z" level=info msg="StopPodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully"
Jan 30 12:56:03.748929 containerd[1445]: time="2025-01-30T12:56:03.748803127Z" level=info msg="RemovePodSandbox for \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\""
Jan 30 12:56:03.748929 containerd[1445]: time="2025-01-30T12:56:03.748829687Z" level=info msg="Forcibly stopping sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\""
Jan 30 12:56:03.748929 containerd[1445]: time="2025-01-30T12:56:03.748913887Z" level=info msg="TearDown network for sandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" successfully"
Jan 30 12:56:03.749370 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:35634.service - OpenSSH per-connection server daemon (10.0.0.1:35634).
Jan 30 12:56:03.751651 containerd[1445]: time="2025-01-30T12:56:03.751609688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.751727 containerd[1445]: time="2025-01-30T12:56:03.751674008Z" level=info msg="RemovePodSandbox \"8f23e07bd3dad2df6c7696406882419cda1752a3dca95c34d2b0e92c027de9ea\" returns successfully"
Jan 30 12:56:03.752082 containerd[1445]: time="2025-01-30T12:56:03.752049808Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\""
Jan 30 12:56:03.752159 containerd[1445]: time="2025-01-30T12:56:03.752139928Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully"
Jan 30 12:56:03.752192 containerd[1445]: time="2025-01-30T12:56:03.752156888Z" level=info msg="StopPodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully"
Jan 30 12:56:03.752457 containerd[1445]: time="2025-01-30T12:56:03.752427368Z" level=info msg="RemovePodSandbox for \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\""
Jan 30 12:56:03.752481 containerd[1445]: time="2025-01-30T12:56:03.752457928Z" level=info msg="Forcibly stopping sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\""
Jan 30 12:56:03.752535 containerd[1445]: time="2025-01-30T12:56:03.752522488Z" level=info msg="TearDown network for sandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" successfully"
Jan 30 12:56:03.755130 containerd[1445]: time="2025-01-30T12:56:03.755079809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:56:03.755241 containerd[1445]: time="2025-01-30T12:56:03.755151609Z" level=info msg="RemovePodSandbox \"63ee5917f330f6e4735d8352324a9f12b75d624dd6c7e02a10d9c6b40150c69b\" returns successfully" Jan 30 12:56:03.755687 containerd[1445]: time="2025-01-30T12:56:03.755585569Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" Jan 30 12:56:03.755687 containerd[1445]: time="2025-01-30T12:56:03.755683129Z" level=info msg="TearDown network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" successfully" Jan 30 12:56:03.755767 containerd[1445]: time="2025-01-30T12:56:03.755692449Z" level=info msg="StopPodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" returns successfully" Jan 30 12:56:03.756095 containerd[1445]: time="2025-01-30T12:56:03.756041369Z" level=info msg="RemovePodSandbox for \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" Jan 30 12:56:03.756157 containerd[1445]: time="2025-01-30T12:56:03.756096369Z" level=info msg="Forcibly stopping sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\"" Jan 30 12:56:03.756179 containerd[1445]: time="2025-01-30T12:56:03.756169209Z" level=info msg="TearDown network for sandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" successfully" Jan 30 12:56:03.759137 containerd[1445]: time="2025-01-30T12:56:03.759092290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.759200 containerd[1445]: time="2025-01-30T12:56:03.759156290Z" level=info msg="RemovePodSandbox \"906cfeb9650e98aec1e355664aa904e943039e3ec3680735925aa93359ff5e88\" returns successfully" Jan 30 12:56:03.759532 containerd[1445]: time="2025-01-30T12:56:03.759494610Z" level=info msg="StopPodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\"" Jan 30 12:56:03.759619 containerd[1445]: time="2025-01-30T12:56:03.759604370Z" level=info msg="TearDown network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" successfully" Jan 30 12:56:03.759642 containerd[1445]: time="2025-01-30T12:56:03.759618810Z" level=info msg="StopPodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" returns successfully" Jan 30 12:56:03.759961 containerd[1445]: time="2025-01-30T12:56:03.759937090Z" level=info msg="RemovePodSandbox for \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\"" Jan 30 12:56:03.759997 containerd[1445]: time="2025-01-30T12:56:03.759967530Z" level=info msg="Forcibly stopping sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\"" Jan 30 12:56:03.760084 containerd[1445]: time="2025-01-30T12:56:03.760063530Z" level=info msg="TearDown network for sandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" successfully" Jan 30 12:56:03.762736 containerd[1445]: time="2025-01-30T12:56:03.762676491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.762841 containerd[1445]: time="2025-01-30T12:56:03.762756891Z" level=info msg="RemovePodSandbox \"9fd6f3a1dc05a09fbf7ac0cd3c27d223a91b12f2033c34a5ddf6027e7b3fdb9d\" returns successfully" Jan 30 12:56:03.763163 containerd[1445]: time="2025-01-30T12:56:03.763136131Z" level=info msg="StopPodSandbox for \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\"" Jan 30 12:56:03.763403 containerd[1445]: time="2025-01-30T12:56:03.763328011Z" level=info msg="TearDown network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" successfully" Jan 30 12:56:03.763403 containerd[1445]: time="2025-01-30T12:56:03.763344491Z" level=info msg="StopPodSandbox for \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" returns successfully" Jan 30 12:56:03.763670 containerd[1445]: time="2025-01-30T12:56:03.763646331Z" level=info msg="RemovePodSandbox for \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\"" Jan 30 12:56:03.763670 containerd[1445]: time="2025-01-30T12:56:03.763674331Z" level=info msg="Forcibly stopping sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\"" Jan 30 12:56:03.763841 containerd[1445]: time="2025-01-30T12:56:03.763819771Z" level=info msg="TearDown network for sandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" successfully" Jan 30 12:56:03.766660 containerd[1445]: time="2025-01-30T12:56:03.766618852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.766734 containerd[1445]: time="2025-01-30T12:56:03.766684252Z" level=info msg="RemovePodSandbox \"4b52b9515db624f6ffb3d7cb9d6a4fc2a6be137aa587858689fe82b954fa8ce4\" returns successfully" Jan 30 12:56:03.768563 containerd[1445]: time="2025-01-30T12:56:03.768169693Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:56:03.768563 containerd[1445]: time="2025-01-30T12:56:03.768276173Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:56:03.768563 containerd[1445]: time="2025-01-30T12:56:03.768286053Z" level=info msg="StopPodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:56:03.769057 containerd[1445]: time="2025-01-30T12:56:03.768904053Z" level=info msg="RemovePodSandbox for \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:56:03.769057 containerd[1445]: time="2025-01-30T12:56:03.768928933Z" level=info msg="Forcibly stopping sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\"" Jan 30 12:56:03.769057 containerd[1445]: time="2025-01-30T12:56:03.769006093Z" level=info msg="TearDown network for sandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" successfully" Jan 30 12:56:03.773842 containerd[1445]: time="2025-01-30T12:56:03.773678574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.773842 containerd[1445]: time="2025-01-30T12:56:03.773743734Z" level=info msg="RemovePodSandbox \"60044e7cbe9619596cf61ba386583db9d379297a56817137ab40dc3f4d6a7b62\" returns successfully" Jan 30 12:56:03.774252 containerd[1445]: time="2025-01-30T12:56:03.774212375Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:56:03.774450 containerd[1445]: time="2025-01-30T12:56:03.774393415Z" level=info msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully" Jan 30 12:56:03.774450 containerd[1445]: time="2025-01-30T12:56:03.774422895Z" level=info msg="StopPodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully" Jan 30 12:56:03.774998 containerd[1445]: time="2025-01-30T12:56:03.774956015Z" level=info msg="RemovePodSandbox for \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:56:03.774998 containerd[1445]: time="2025-01-30T12:56:03.774992575Z" level=info msg="Forcibly stopping sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\"" Jan 30 12:56:03.775076 containerd[1445]: time="2025-01-30T12:56:03.775060495Z" level=info msg="TearDown network for sandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" successfully" Jan 30 12:56:03.777622 containerd[1445]: time="2025-01-30T12:56:03.777585976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.777681 containerd[1445]: time="2025-01-30T12:56:03.777646216Z" level=info msg="RemovePodSandbox \"685702b07ef91fc7d4abbc6c6c68281468614b9e002e27e293861046a454a6b8\" returns successfully" Jan 30 12:56:03.778089 containerd[1445]: time="2025-01-30T12:56:03.778066576Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" Jan 30 12:56:03.778332 containerd[1445]: time="2025-01-30T12:56:03.778288256Z" level=info msg="TearDown network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" successfully" Jan 30 12:56:03.778332 containerd[1445]: time="2025-01-30T12:56:03.778306256Z" level=info msg="StopPodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" returns successfully" Jan 30 12:56:03.778656 containerd[1445]: time="2025-01-30T12:56:03.778628776Z" level=info msg="RemovePodSandbox for \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" Jan 30 12:56:03.778691 containerd[1445]: time="2025-01-30T12:56:03.778661096Z" level=info msg="Forcibly stopping sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\"" Jan 30 12:56:03.779724 containerd[1445]: time="2025-01-30T12:56:03.778739536Z" level=info msg="TearDown network for sandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" successfully" Jan 30 12:56:03.781251 containerd[1445]: time="2025-01-30T12:56:03.781189497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.781251 containerd[1445]: time="2025-01-30T12:56:03.781252337Z" level=info msg="RemovePodSandbox \"71027b27d61be92fe38af28da89b754eda3d03132cf3d25de11b72d8f5908f85\" returns successfully" Jan 30 12:56:03.781923 containerd[1445]: time="2025-01-30T12:56:03.781616697Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\"" Jan 30 12:56:03.781923 containerd[1445]: time="2025-01-30T12:56:03.781713657Z" level=info msg="TearDown network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" successfully" Jan 30 12:56:03.781923 containerd[1445]: time="2025-01-30T12:56:03.781722337Z" level=info msg="StopPodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" returns successfully" Jan 30 12:56:03.782076 containerd[1445]: time="2025-01-30T12:56:03.782042097Z" level=info msg="RemovePodSandbox for \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\"" Jan 30 12:56:03.782076 containerd[1445]: time="2025-01-30T12:56:03.782068337Z" level=info msg="Forcibly stopping sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\"" Jan 30 12:56:03.782190 containerd[1445]: time="2025-01-30T12:56:03.782138057Z" level=info msg="TearDown network for sandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" successfully" Jan 30 12:56:03.784507 containerd[1445]: time="2025-01-30T12:56:03.784459698Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.784567 containerd[1445]: time="2025-01-30T12:56:03.784520098Z" level=info msg="RemovePodSandbox \"f6e4de9a5c35801f94db407ff176b9bdd36ef07db474989981fa076f65aadf52\" returns successfully" Jan 30 12:56:03.785161 containerd[1445]: time="2025-01-30T12:56:03.784981538Z" level=info msg="StopPodSandbox for \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\"" Jan 30 12:56:03.785161 containerd[1445]: time="2025-01-30T12:56:03.785075218Z" level=info msg="TearDown network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" successfully" Jan 30 12:56:03.785161 containerd[1445]: time="2025-01-30T12:56:03.785086818Z" level=info msg="StopPodSandbox for \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" returns successfully" Jan 30 12:56:03.786111 containerd[1445]: time="2025-01-30T12:56:03.785487138Z" level=info msg="RemovePodSandbox for \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\"" Jan 30 12:56:03.786111 containerd[1445]: time="2025-01-30T12:56:03.785512458Z" level=info msg="Forcibly stopping sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\"" Jan 30 12:56:03.786111 containerd[1445]: time="2025-01-30T12:56:03.785575578Z" level=info msg="TearDown network for sandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" successfully" Jan 30 12:56:03.788043 containerd[1445]: time="2025-01-30T12:56:03.787986379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.788109 containerd[1445]: time="2025-01-30T12:56:03.788048339Z" level=info msg="RemovePodSandbox \"2617c260b0c6447460d2393c6a4a7bbb44278dd4f6d43b04186db9fee6b16a4d\" returns successfully" Jan 30 12:56:03.789586 containerd[1445]: time="2025-01-30T12:56:03.789266779Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:56:03.789586 containerd[1445]: time="2025-01-30T12:56:03.789453859Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:56:03.789586 containerd[1445]: time="2025-01-30T12:56:03.789464259Z" level=info msg="StopPodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:56:03.789809 containerd[1445]: time="2025-01-30T12:56:03.789784539Z" level=info msg="RemovePodSandbox for \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:56:03.789836 containerd[1445]: time="2025-01-30T12:56:03.789814819Z" level=info msg="Forcibly stopping sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\"" Jan 30 12:56:03.790570 containerd[1445]: time="2025-01-30T12:56:03.789881659Z" level=info msg="TearDown network for sandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" successfully" Jan 30 12:56:03.792362 containerd[1445]: time="2025-01-30T12:56:03.792300820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.792362 containerd[1445]: time="2025-01-30T12:56:03.792364580Z" level=info msg="RemovePodSandbox \"4b209b34a037b15a964276f7347c6f027f6cba078b0e78e46cb401691713e5b5\" returns successfully" Jan 30 12:56:03.793267 containerd[1445]: time="2025-01-30T12:56:03.793205540Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:56:03.793621 containerd[1445]: time="2025-01-30T12:56:03.793431100Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully" Jan 30 12:56:03.793621 containerd[1445]: time="2025-01-30T12:56:03.793448380Z" level=info msg="StopPodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully" Jan 30 12:56:03.794223 containerd[1445]: time="2025-01-30T12:56:03.794129901Z" level=info msg="RemovePodSandbox for \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:56:03.795039 containerd[1445]: time="2025-01-30T12:56:03.794992741Z" level=info msg="Forcibly stopping sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\"" Jan 30 12:56:03.795682 containerd[1445]: time="2025-01-30T12:56:03.795351781Z" level=info msg="TearDown network for sandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" successfully" Jan 30 12:56:03.799183 containerd[1445]: time="2025-01-30T12:56:03.799018302Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.799183 containerd[1445]: time="2025-01-30T12:56:03.799085982Z" level=info msg="RemovePodSandbox \"b9ca4af46a309552b4435c4d8fc3bc572d20a1e52917657fd8fbc7a0bc8cdd03\" returns successfully" Jan 30 12:56:03.800160 containerd[1445]: time="2025-01-30T12:56:03.799633062Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" Jan 30 12:56:03.800160 containerd[1445]: time="2025-01-30T12:56:03.799922582Z" level=info msg="TearDown network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" successfully" Jan 30 12:56:03.800160 containerd[1445]: time="2025-01-30T12:56:03.799939022Z" level=info msg="StopPodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" returns successfully" Jan 30 12:56:03.800950 containerd[1445]: time="2025-01-30T12:56:03.800783943Z" level=info msg="RemovePodSandbox for \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" Jan 30 12:56:03.800950 containerd[1445]: time="2025-01-30T12:56:03.800816543Z" level=info msg="Forcibly stopping sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\"" Jan 30 12:56:03.800950 containerd[1445]: time="2025-01-30T12:56:03.800881743Z" level=info msg="TearDown network for sandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" successfully" Jan 30 12:56:03.803625 containerd[1445]: time="2025-01-30T12:56:03.803480863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.803625 containerd[1445]: time="2025-01-30T12:56:03.803539783Z" level=info msg="RemovePodSandbox \"a8e22b4da86793fe84e52fe57d5b7190e9c3cb882bd6016ddd68fc4695b77361\" returns successfully" Jan 30 12:56:03.805851 containerd[1445]: time="2025-01-30T12:56:03.805399104Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\"" Jan 30 12:56:03.805851 containerd[1445]: time="2025-01-30T12:56:03.805507264Z" level=info msg="TearDown network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" successfully" Jan 30 12:56:03.805851 containerd[1445]: time="2025-01-30T12:56:03.805518184Z" level=info msg="StopPodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" returns successfully" Jan 30 12:56:03.809005 containerd[1445]: time="2025-01-30T12:56:03.807070865Z" level=info msg="RemovePodSandbox for \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\"" Jan 30 12:56:03.809005 containerd[1445]: time="2025-01-30T12:56:03.807103825Z" level=info msg="Forcibly stopping sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\"" Jan 30 12:56:03.809005 containerd[1445]: time="2025-01-30T12:56:03.807184825Z" level=info msg="TearDown network for sandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" successfully" Jan 30 12:56:03.811689 containerd[1445]: time="2025-01-30T12:56:03.811650186Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.811949 containerd[1445]: time="2025-01-30T12:56:03.811924586Z" level=info msg="RemovePodSandbox \"13fd97398d9158d09f876902fca137156b7e3b3a638c833142ca7ac5b6f43f25\" returns successfully" Jan 30 12:56:03.812515 containerd[1445]: time="2025-01-30T12:56:03.812472586Z" level=info msg="StopPodSandbox for \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\"" Jan 30 12:56:03.813005 containerd[1445]: time="2025-01-30T12:56:03.812974026Z" level=info msg="TearDown network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" successfully" Jan 30 12:56:03.813005 containerd[1445]: time="2025-01-30T12:56:03.812998666Z" level=info msg="StopPodSandbox for \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" returns successfully" Jan 30 12:56:03.814192 containerd[1445]: time="2025-01-30T12:56:03.813399546Z" level=info msg="RemovePodSandbox for \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\"" Jan 30 12:56:03.814192 containerd[1445]: time="2025-01-30T12:56:03.813428386Z" level=info msg="Forcibly stopping sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\"" Jan 30 12:56:03.814192 containerd[1445]: time="2025-01-30T12:56:03.813508466Z" level=info msg="TearDown network for sandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" successfully" Jan 30 12:56:03.816551 containerd[1445]: time="2025-01-30T12:56:03.816492107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:56:03.816709 containerd[1445]: time="2025-01-30T12:56:03.816692627Z" level=info msg="RemovePodSandbox \"159a41e7472f1a61290e121c8df57b79a71620e6c591f1b99bb0ca8679ccd35f\" returns successfully" Jan 30 12:56:03.829041 sshd[5790]: Accepted publickey for core from 10.0.0.1 port 35634 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:56:03.831143 sshd-session[5790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:03.836820 systemd-logind[1427]: New session 19 of user core. Jan 30 12:56:03.851120 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 12:56:03.992139 sshd[5792]: Connection closed by 10.0.0.1 port 35634 Jan 30 12:56:03.992633 sshd-session[5790]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:03.996646 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:35634.service: Deactivated successfully. Jan 30 12:56:03.998632 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 12:56:04.000600 systemd-logind[1427]: Session 19 logged out. Waiting for processes to exit. Jan 30 12:56:04.001479 systemd-logind[1427]: Removed session 19. Jan 30 12:56:07.357351 kubelet[2613]: E0130 12:56:07.357299 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:09.008231 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:35708.service - OpenSSH per-connection server daemon (10.0.0.1:35708). Jan 30 12:56:09.066698 sshd[5857]: Accepted publickey for core from 10.0.0.1 port 35708 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:56:09.068307 sshd-session[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:09.073460 systemd-logind[1427]: New session 20 of user core. Jan 30 12:56:09.084102 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 30 12:56:09.254275 sshd[5859]: Connection closed by 10.0.0.1 port 35708 Jan 30 12:56:09.254653 sshd-session[5857]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:09.257176 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 12:56:09.258599 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:35708.service: Deactivated successfully. Jan 30 12:56:09.262438 systemd-logind[1427]: Session 20 logged out. Waiting for processes to exit. Jan 30 12:56:09.264037 systemd-logind[1427]: Removed session 20. Jan 30 12:56:11.658733 kubelet[2613]: E0130 12:56:11.658638 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:56:14.268966 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:37844.service - OpenSSH per-connection server daemon (10.0.0.1:37844). Jan 30 12:56:14.330309 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 37844 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 12:56:14.332674 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:56:14.340912 systemd-logind[1427]: New session 21 of user core. Jan 30 12:56:14.349115 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 12:56:14.514584 sshd[5876]: Connection closed by 10.0.0.1 port 37844 Jan 30 12:56:14.515123 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Jan 30 12:56:14.519346 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:37844.service: Deactivated successfully. Jan 30 12:56:14.523647 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 12:56:14.526039 systemd-logind[1427]: Session 21 logged out. Waiting for processes to exit. Jan 30 12:56:14.528987 systemd-logind[1427]: Removed session 21.