Jan 13 20:30:22.918599 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:30:22.918620 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:30:22.918630 kernel: KASLR enabled
Jan 13 20:30:22.918635 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:30:22.918641 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:30:22.918647 kernel: random: crng init done
Jan 13 20:30:22.918654 kernel: secureboot: Secure boot disabled
Jan 13 20:30:22.918659 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:30:22.918666 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:30:22.918673 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:30:22.918679 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918685 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918690 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918696 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918703 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918711 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918717 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918723 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918730 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:30:22.918736 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:30:22.918742 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:30:22.918748 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:30:22.918754 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 20:30:22.918761 kernel: Zone ranges:
Jan 13 20:30:22.918767 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:30:22.918774 kernel: DMA32 empty
Jan 13 20:30:22.918780 kernel: Normal empty
Jan 13 20:30:22.918787 kernel: Movable zone start for each node
Jan 13 20:30:22.918793 kernel: Early memory node ranges
Jan 13 20:30:22.918799 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:30:22.918805 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:30:22.918816 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:30:22.918822 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:30:22.918828 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:30:22.918834 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:30:22.918840 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:30:22.918846 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:30:22.918854 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:30:22.918860 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:30:22.918866 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:30:22.918875 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:30:22.918881 kernel: psci: Trusted OS migration not required
Jan 13 20:30:22.918888 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:30:22.918896 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:30:22.918903 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:30:22.918910 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:30:22.918916 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:30:22.918923 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:30:22.918930 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:30:22.918936 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:30:22.918943 kernel: CPU features: detected: Spectre-v4
Jan 13 20:30:22.918950 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:30:22.918956 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:30:22.918964 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:30:22.918971 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:30:22.918977 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:30:22.918984 kernel: alternatives: applying boot alternatives
Jan 13 20:30:22.918991 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:30:22.918998 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:30:22.919005 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:30:22.919011 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:30:22.919018 kernel: Fallback order for Node 0: 0
Jan 13 20:30:22.919024 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:30:22.919031 kernel: Policy zone: DMA
Jan 13 20:30:22.919039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:30:22.919046 kernel: software IO TLB: area num 4.
Jan 13 20:30:22.919052 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:30:22.919059 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 13 20:30:22.919066 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:30:22.919072 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:30:22.919079 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:30:22.919086 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:30:22.919093 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:30:22.919100 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:30:22.919106 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:30:22.919113 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:30:22.919121 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:30:22.919127 kernel: GICv3: 256 SPIs implemented
Jan 13 20:30:22.919134 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:30:22.919140 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:30:22.919147 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:30:22.919153 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:30:22.919160 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:30:22.919167 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:30:22.919173 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:30:22.919180 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:30:22.919186 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:30:22.919194 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:30:22.919201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:30:22.919207 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:30:22.919214 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:30:22.919221 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:30:22.919227 kernel: arm-pv: using stolen time PV
Jan 13 20:30:22.919234 kernel: Console: colour dummy device 80x25
Jan 13 20:30:22.919241 kernel: ACPI: Core revision 20230628
Jan 13 20:30:22.919248 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:30:22.919255 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:30:22.919263 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:30:22.919269 kernel: landlock: Up and running.
Jan 13 20:30:22.919276 kernel: SELinux: Initializing.
Jan 13 20:30:22.919283 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:30:22.919290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:30:22.919297 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:30:22.919303 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:30:22.919310 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:30:22.919317 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:30:22.919325 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:30:22.919332 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:30:22.919351 kernel: Remapping and enabling EFI services.
Jan 13 20:30:22.919358 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:30:22.919364 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:30:22.919393 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:30:22.919402 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:30:22.919408 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:30:22.919415 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:30:22.919422 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:30:22.919431 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:30:22.919438 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:30:22.919450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:30:22.919458 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:30:22.919465 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:30:22.919472 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:30:22.919479 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:30:22.919487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:30:22.919494 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:30:22.919502 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:30:22.919509 kernel: SMP: Total of 4 processors activated.
Jan 13 20:30:22.919516 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:30:22.919524 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:30:22.919531 kernel: CPU features: detected: Common not Private translations
Jan 13 20:30:22.919538 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:30:22.919545 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:30:22.919552 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:30:22.919561 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:30:22.919568 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:30:22.919575 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:30:22.919582 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:30:22.919589 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:30:22.919596 kernel: alternatives: applying system-wide alternatives
Jan 13 20:30:22.919603 kernel: devtmpfs: initialized
Jan 13 20:30:22.919610 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:30:22.919618 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:30:22.919626 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:30:22.919633 kernel: SMBIOS 3.0.0 present.
Jan 13 20:30:22.919640 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:30:22.919648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:30:22.919655 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:30:22.919662 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:30:22.919669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:30:22.919676 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:30:22.919683 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
Jan 13 20:30:22.919692 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:30:22.919699 kernel: cpuidle: using governor menu
Jan 13 20:30:22.919706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:30:22.919713 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:30:22.919720 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:30:22.919727 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:30:22.919734 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:30:22.919741 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:30:22.919748 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:30:22.919757 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:30:22.919764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:30:22.919771 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:30:22.919778 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:30:22.919786 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:30:22.919793 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:30:22.919800 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:30:22.919807 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:30:22.919814 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:30:22.919823 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:30:22.919830 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:30:22.919836 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:30:22.919844 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:30:22.919851 kernel: ACPI: Interpreter enabled
Jan 13 20:30:22.919858 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:30:22.919865 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:30:22.919872 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:30:22.919879 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:30:22.919888 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:30:22.920020 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:30:22.920092 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:30:22.920157 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:30:22.920222 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:30:22.920286 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:30:22.920295 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:30:22.920305 kernel: PCI host bridge to bus 0000:00
Jan 13 20:30:22.920458 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:30:22.920533 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:30:22.920595 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:30:22.920654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:30:22.920736 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:30:22.920815 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:30:22.920888 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:30:22.920953 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:30:22.921018 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:30:22.921083 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:30:22.921149 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:30:22.921216 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:30:22.921274 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:30:22.921335 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:30:22.921420 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:30:22.921432 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:30:22.921439 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:30:22.921447 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:30:22.921454 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:30:22.921461 kernel: iommu: Default domain type: Translated
Jan 13 20:30:22.921468 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:30:22.921479 kernel: efivars: Registered efivars operations
Jan 13 20:30:22.921486 kernel: vgaarb: loaded
Jan 13 20:30:22.921494 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:30:22.921501 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:30:22.921509 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:30:22.921516 kernel: pnp: PnP ACPI init
Jan 13 20:30:22.921592 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:30:22.921603 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:30:22.921612 kernel: NET: Registered PF_INET protocol family
Jan 13 20:30:22.921620 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:30:22.921627 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:30:22.921634 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:30:22.921642 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:30:22.921649 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:30:22.921657 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:30:22.921664 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:30:22.921671 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:30:22.921680 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:30:22.921688 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:30:22.921695 kernel: kvm [1]: HYP mode not available
Jan 13 20:30:22.921702 kernel: Initialise system trusted keyrings
Jan 13 20:30:22.921709 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:30:22.921716 kernel: Key type asymmetric registered
Jan 13 20:30:22.921725 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:30:22.921732 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:30:22.921739 kernel: io scheduler mq-deadline registered
Jan 13 20:30:22.921748 kernel: io scheduler kyber registered
Jan 13 20:30:22.921755 kernel: io scheduler bfq registered
Jan 13 20:30:22.921762 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:30:22.921770 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:30:22.921777 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:30:22.921843 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:30:22.921853 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:30:22.921860 kernel: thunder_xcv, ver 1.0
Jan 13 20:30:22.921867 kernel: thunder_bgx, ver 1.0
Jan 13 20:30:22.921876 kernel: nicpf, ver 1.0
Jan 13 20:30:22.921884 kernel: nicvf, ver 1.0
Jan 13 20:30:22.921956 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:30:22.922019 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:30:22 UTC (1736800222)
Jan 13 20:30:22.922028 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:30:22.922036 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:30:22.922043 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:30:22.922051 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:30:22.922060 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:30:22.922067 kernel: Segment Routing with IPv6
Jan 13 20:30:22.922075 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:30:22.922082 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:30:22.922089 kernel: Key type dns_resolver registered
Jan 13 20:30:22.922096 kernel: registered taskstats version 1
Jan 13 20:30:22.922103 kernel: Loading compiled-in X.509 certificates
Jan 13 20:30:22.922110 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:30:22.922118 kernel: Key type .fscrypt registered
Jan 13 20:30:22.922126 kernel: Key type fscrypt-provisioning registered
Jan 13 20:30:22.922133 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:30:22.922141 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:30:22.922148 kernel: ima: No architecture policies found
Jan 13 20:30:22.922155 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:30:22.922162 kernel: clk: Disabling unused clocks
Jan 13 20:30:22.922170 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:30:22.922179 kernel: Run /init as init process
Jan 13 20:30:22.922187 kernel: with arguments:
Jan 13 20:30:22.922197 kernel: /init
Jan 13 20:30:22.922204 kernel: with environment:
Jan 13 20:30:22.922211 kernel: HOME=/
Jan 13 20:30:22.922218 kernel: TERM=linux
Jan 13 20:30:22.922225 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:30:22.922234 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:30:22.922243 systemd[1]: Detected virtualization kvm.
Jan 13 20:30:22.922251 systemd[1]: Detected architecture arm64.
Jan 13 20:30:22.922260 systemd[1]: Running in initrd.
Jan 13 20:30:22.922267 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:30:22.922274 systemd[1]: Hostname set to <localhost>.
Jan 13 20:30:22.922282 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:30:22.922290 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:30:22.922298 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:30:22.922305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:30:22.922314 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:30:22.922323 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:30:22.922331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:30:22.922339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:30:22.922348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:30:22.922355 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:30:22.922363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:30:22.922385 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:30:22.922397 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:30:22.922405 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:30:22.922413 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:30:22.922420 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:30:22.922428 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:30:22.922453 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:30:22.922462 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:30:22.922470 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:30:22.922480 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:30:22.922488 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:30:22.922496 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:30:22.922503 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:30:22.922511 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:30:22.922519 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:30:22.922526 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:30:22.922534 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:30:22.922541 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:30:22.922551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:30:22.922559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:30:22.922566 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:30:22.922574 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:30:22.922582 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:30:22.922590 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:30:22.922600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:30:22.922628 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:30:22.922649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:30:22.922657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:30:22.922666 systemd-journald[238]: Journal started
Jan 13 20:30:22.922684 systemd-journald[238]: Runtime Journal (/run/log/journal/b359f2f1c50640429a74f50fcb7152b7) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:30:22.914042 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:30:22.924474 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:30:22.928409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:30:22.928549 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:30:22.930982 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:30:22.932135 kernel: Bridge firewalling registered
Jan 13 20:30:22.932030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:30:22.933205 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:30:22.937485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:30:22.950629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:30:22.951897 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:30:22.953853 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:30:22.963069 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:30:22.963974 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:30:22.966949 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:30:22.975504 dracut-cmdline[273]: dracut-dracut-053
Jan 13 20:30:22.978072 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:30:22.995545 systemd-resolved[279]: Positive Trust Anchors:
Jan 13 20:30:22.995617 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:30:22.995648 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:30:23.000360 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 13 20:30:23.001411 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:30:23.003127 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:30:23.051426 kernel: SCSI subsystem initialized
Jan 13 20:30:23.057398 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:30:23.064403 kernel: iscsi: registered transport (tcp)
Jan 13 20:30:23.088403 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:30:23.088440 kernel: QLogic iSCSI HBA Driver
Jan 13 20:30:23.132955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:30:23.141569 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:30:23.156628 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:30:23.156701 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:30:23.157486 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:30:23.209413 kernel: raid6: neonx8 gen() 12730 MB/s
Jan 13 20:30:23.226397 kernel: raid6: neonx4 gen() 14560 MB/s
Jan 13 20:30:23.243399 kernel: raid6: neonx2 gen() 12792 MB/s
Jan 13 20:30:23.260400 kernel: raid6: neonx1 gen() 9975 MB/s
Jan 13 20:30:23.277395 kernel: raid6: int64x8 gen() 6949 MB/s
Jan 13 20:30:23.294392 kernel: raid6: int64x4 gen() 7344 MB/s
Jan 13 20:30:23.311395 kernel: raid6: int64x2 gen() 6130 MB/s
Jan 13 20:30:23.328393 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 13 20:30:23.328425 kernel: raid6: using algorithm neonx4 gen() 14560 MB/s
Jan 13 20:30:23.345403 kernel: raid6: .... xor() 12355 MB/s, rmw enabled
Jan 13 20:30:23.345419 kernel: raid6: using neon recovery algorithm
Jan 13 20:30:23.350396 kernel: xor: measuring software checksum speed
Jan 13 20:30:23.350415 kernel: 8regs : 19769 MB/sec
Jan 13 20:30:23.351773 kernel: 32regs : 18090 MB/sec
Jan 13 20:30:23.351785 kernel: arm64_neon : 26989 MB/sec
Jan 13 20:30:23.351794 kernel: xor: using function: arm64_neon (26989 MB/sec)
Jan 13 20:30:23.403415 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:30:23.415567 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:30:23.431596 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:30:23.449303 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 13 20:30:23.452458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:30:23.461588 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:30:23.476084 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 13 20:30:23.505249 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:30:23.520607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:30:23.576493 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:30:23.587656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:30:23.598498 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:30:23.600094 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:30:23.601224 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:30:23.603096 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:30:23.612942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:30:23.622901 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:30:23.636598 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:30:23.636704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:30:23.636717 kernel: GPT:9289727 != 19775487
Jan 13 20:30:23.636727 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:30:23.636736 kernel: GPT:9289727 != 19775487
Jan 13 20:30:23.636746 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:30:23.636755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:30:23.622955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:30:23.631255 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:30:23.631398 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:30:23.636376 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:30:23.637122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:30:23.637269 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:30:23.638569 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:30:23.650705 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:30:23.660679 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (521)
Jan 13 20:30:23.660729 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
Jan 13 20:30:23.661597 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:30:23.666483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:30:23.674732 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:30:23.681763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:30:23.682656 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:30:23.688548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:30:23.701632 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:30:23.703218 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:30:23.707802 disk-uuid[552]: Primary Header is updated.
Jan 13 20:30:23.707802 disk-uuid[552]: Secondary Entries is updated.
Jan 13 20:30:23.707802 disk-uuid[552]: Secondary Header is updated.
Jan 13 20:30:23.712898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:30:23.731759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:30:24.721638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:30:24.721687 disk-uuid[553]: The operation has completed successfully.
Jan 13 20:30:24.741757 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:30:24.741856 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:30:24.768835 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:30:24.771814 sh[573]: Success
Jan 13 20:30:24.790872 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:30:24.826008 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:30:24.828034 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:30:24.828821 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:30:24.839605 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:30:24.839654 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:30:24.839665 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:30:24.840692 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:30:24.840710 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:30:24.843850 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:30:24.845002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:30:24.845806 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:30:24.847729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:30:24.859712 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:30:24.859767 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:30:24.859778 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:30:24.862658 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:30:24.870863 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:30:24.873446 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:30:24.878353 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:30:24.885825 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:30:24.942966 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:30:24.953290 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:30:24.981534 systemd-networkd[759]: lo: Link UP
Jan 13 20:30:24.981544 systemd-networkd[759]: lo: Gained carrier
Jan 13 20:30:24.982433 systemd-networkd[759]: Enumeration completed
Jan 13 20:30:24.982595 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:30:24.983698 systemd[1]: Reached target network.target - Network.
Jan 13 20:30:24.985769 ignition[673]: Ignition 2.20.0
Jan 13 20:30:24.985196 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:30:24.985776 ignition[673]: Stage: fetch-offline
Jan 13 20:30:24.985199 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:30:24.985809 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:30:24.986177 systemd-networkd[759]: eth0: Link UP
Jan 13 20:30:24.985817 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:30:24.986180 systemd-networkd[759]: eth0: Gained carrier
Jan 13 20:30:24.985965 ignition[673]: parsed url from cmdline: ""
Jan 13 20:30:24.986188 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:30:24.985968 ignition[673]: no config URL provided
Jan 13 20:30:24.985972 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:30:24.985979 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:30:24.986006 ignition[673]: op(1): [started] loading QEMU firmware config module
Jan 13 20:30:24.986010 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:30:25.000101 ignition[673]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:30:25.001466 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:30:25.037796 ignition[673]: parsing config with SHA512: 2095abb72e9f0054790711832d5434886f48d5c7237ef43294e5782b7f663dc4afc8b9dcb02d5d16140171c1c932b11b9f1481bc815b135bbb0de6a2cf50927a
Jan 13 20:30:25.042296 unknown[673]: fetched base config from "system"
Jan 13 20:30:25.042306 unknown[673]: fetched user config from "qemu"
Jan 13 20:30:25.042712 ignition[673]: fetch-offline: fetch-offline passed
Jan 13 20:30:25.042788 ignition[673]: Ignition finished successfully
Jan 13 20:30:25.045446 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:30:25.047341 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:30:25.057911 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:30:25.068674 ignition[773]: Ignition 2.20.0
Jan 13 20:30:25.068685 ignition[773]: Stage: kargs
Jan 13 20:30:25.068844 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:30:25.068854 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:30:25.072926 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:30:25.069721 ignition[773]: kargs: kargs passed
Jan 13 20:30:25.069768 ignition[773]: Ignition finished successfully
Jan 13 20:30:25.079542 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:30:25.090234 ignition[782]: Ignition 2.20.0
Jan 13 20:30:25.090245 ignition[782]: Stage: disks
Jan 13 20:30:25.090431 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:30:25.090441 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:30:25.091262 ignition[782]: disks: disks passed
Jan 13 20:30:25.092678 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:30:25.091307 ignition[782]: Ignition finished successfully
Jan 13 20:30:25.093580 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:30:25.094536 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:30:25.095993 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:30:25.097058 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:30:25.098347 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:30:25.107581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:30:25.117082 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:30:25.121630 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:30:25.129589 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:30:25.169407 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:30:25.169996 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:30:25.171060 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:30:25.182496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:30:25.184019 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:30:25.185058 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:30:25.185138 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:30:25.191470 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Jan 13 20:30:25.185166 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:30:25.192009 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:30:25.195770 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:30:25.195792 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:30:25.195817 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:30:25.194612 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:30:25.198399 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:30:25.199696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:30:25.237829 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:30:25.242707 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:30:25.245973 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:30:25.249698 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:30:25.323325 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:30:25.338543 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:30:25.339964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:30:25.344403 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:30:25.362602 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:30:25.372493 ignition[914]: INFO : Ignition 2.20.0
Jan 13 20:30:25.372493 ignition[914]: INFO : Stage: mount
Jan 13 20:30:25.373747 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:30:25.373747 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:30:25.373747 ignition[914]: INFO : mount: mount passed
Jan 13 20:30:25.373747 ignition[914]: INFO : Ignition finished successfully
Jan 13 20:30:25.376356 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:30:25.391530 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:30:25.838306 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:30:25.849618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:30:25.854393 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 13 20:30:25.855936 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:30:25.855951 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:30:25.855961 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:30:25.858393 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:30:25.859507 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:30:25.876653 ignition[944]: INFO : Ignition 2.20.0
Jan 13 20:30:25.876653 ignition[944]: INFO : Stage: files
Jan 13 20:30:25.877881 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:30:25.877881 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:30:25.877881 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:30:25.880270 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:30:25.880270 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:30:25.880270 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:30:25.880270 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:30:25.884223 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:30:25.884223 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:30:25.884223 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:30:25.880570 unknown[944]: wrote ssh authorized keys file for user: core
Jan 13 20:30:25.977177 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:30:26.028657 systemd-networkd[759]: eth0: Gained IPv6LL
Jan 13 20:30:26.173457 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:30:26.174868 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:30:26.485754 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:30:26.774751 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:30:26.774751 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:30:26.777329 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:30:26.798228 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:30:26.801781 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:30:26.803872 ignition[944]: INFO : files: files passed
Jan 13 20:30:26.803872 ignition[944]: INFO : Ignition finished successfully
Jan 13 20:30:26.804259 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:30:26.814524 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:30:26.816566 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:30:26.818188 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:30:26.819005 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:30:26.824339 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:30:26.827524 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:30:26.828708 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:30:26.829824 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:30:26.831808 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:30:26.833160 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:30:26.850553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:30:26.869806 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:30:26.869917 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:30:26.871500 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:30:26.872816 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:30:26.874220 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:30:26.875041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:30:26.889790 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:30:26.905642 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:30:26.913657 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:30:26.914587 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:30:26.916060 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:30:26.917310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:30:26.917452 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:30:26.919428 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:30:26.920880 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:30:26.922180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:30:26.923414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:30:26.924868 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:30:26.926247 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:30:26.927565 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:30:26.928975 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:30:26.930347 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:30:26.931600 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:30:26.932673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:30:26.932800 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:30:26.934461 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:30:26.935832 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:30:26.937302 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:30:26.940435 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:30:26.941447 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:30:26.941561 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:30:26.943785 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 20:30:26.943905 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:30:26.945315 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:30:26.946437 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:30:26.950432 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:30:26.951342 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:30:26.953003 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:30:26.954126 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:30:26.954210 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:30:26.955287 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:30:26.955375 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:30:26.956488 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:30:26.956595 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:30:26.957880 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:30:26.957981 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:30:26.969596 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:30:26.971031 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:30:26.971684 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:30:26.971811 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:30:26.973123 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:30:26.973221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:30:26.978548 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 20:30:26.978649 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:30:26.984112 ignition[1000]: INFO : Ignition 2.20.0 Jan 13 20:30:26.984112 ignition[1000]: INFO : Stage: umount Jan 13 20:30:26.985459 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:30:26.985459 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:30:26.985459 ignition[1000]: INFO : umount: umount passed Jan 13 20:30:26.985459 ignition[1000]: INFO : Ignition finished successfully Jan 13 20:30:26.986827 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:30:26.986937 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:30:26.987947 systemd[1]: Stopped target network.target - Network. Jan 13 20:30:26.989838 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:30:26.989895 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jan 13 20:30:26.991239 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:30:26.991282 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:30:26.992735 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:30:26.992776 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:30:26.994081 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:30:26.994121 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:30:26.995573 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:30:26.997299 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:30:26.999528 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:30:27.006113 systemd-networkd[759]: eth0: DHCPv6 lease lost Jan 13 20:30:27.007897 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:30:27.008008 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:30:27.010259 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:30:27.010419 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:30:27.012597 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:30:27.012660 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:30:27.021554 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:30:27.022233 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:30:27.022286 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:30:27.023810 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:30:27.023849 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:30:27.025128 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:30:27.025165 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:30:27.026715 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:30:27.026756 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:30:27.029248 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:30:27.037582 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:30:27.037759 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 20:30:27.053110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:30:27.053251 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:30:27.054829 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:30:27.054893 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:30:27.055894 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:30:27.055924 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:30:27.057142 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:30:27.057182 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:30:27.059228 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:30:27.059265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:30:27.068320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:30:27.068375 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:30:27.078549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:30:27.079299 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:30:27.079351 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:30:27.081108 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:30:27.081151 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:30:27.082559 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:30:27.082601 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:30:27.084254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:30:27.084296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:30:27.086208 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:30:27.086300 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 20:30:27.087875 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:30:27.087970 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:30:27.089685 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:30:27.090725 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:30:27.090794 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:30:27.093035 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:30:27.102832 systemd[1]: Switching root. Jan 13 20:30:27.130543 systemd-journald[238]: Journal stopped Jan 13 20:30:27.824239 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 13 20:30:27.824303 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:30:27.824320 kernel: SELinux: policy capability open_perms=1 Jan 13 20:30:27.824336 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:30:27.824351 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:30:27.824371 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:30:27.824456 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:30:27.824473 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:30:27.824486 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:30:27.824496 kernel: audit: type=1403 audit(1736800227.280:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:30:27.824507 systemd[1]: Successfully loaded SELinux policy in 37.654ms. Jan 13 20:30:27.824525 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.360ms. Jan 13 20:30:27.824538 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:30:27.824550 systemd[1]: Detected virtualization kvm. Jan 13 20:30:27.824561 systemd[1]: Detected architecture arm64. 
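The systemd 255 feature string and the "Detected virtualization kvm" / "Detected architecture arm64" lines above can all be reproduced from a shell on the running host:

```sh
# Version plus compile-time feature flags (+PAM +AUDIT +SELINUX ...), as logged.
systemctl --version
# The virtualization technology systemd detected; prints "kvm" on this host.
systemd-detect-virt
```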
Jan 13 20:30:27.824572 systemd[1]: Detected first boot. Jan 13 20:30:27.824585 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:30:27.824601 zram_generator::config[1044]: No configuration found. Jan 13 20:30:27.824614 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:30:27.824626 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:30:27.824636 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:30:27.824648 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:30:27.824660 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:30:27.824671 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:30:27.824683 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:30:27.824694 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:30:27.824706 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:30:27.824717 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:30:27.824729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:30:27.824741 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 20:30:27.824752 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:30:27.824763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:30:27.824775 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:30:27.824787 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:30:27.824798 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:30:27.824809 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:30:27.824820 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:30:27.824831 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:30:27.824841 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:30:27.824852 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:30:27.824864 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:30:27.824876 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:30:27.824887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:30:27.824898 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:30:27.824909 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:30:27.824924 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:30:27.824935 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:30:27.824946 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:30:27.824957 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:30:27.824970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
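"Populated /etc with preset unit settings" is systemd applying preset policy on first boot, the same enable/disable decisions Ignition recorded earlier for prepare-helm.service and coreos-metadata.service. Expressed as a preset fragment (file name hypothetical):

```
# /etc/systemd/system-preset/20-ignition.preset (hypothetical)
enable prepare-helm.service
disable coreos-metadata.service
```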
Jan 13 20:30:27.824982 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:30:27.824993 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:30:27.825003 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:30:27.825014 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:30:27.825025 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:30:27.825035 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:30:27.825046 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:30:27.825057 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:30:27.825070 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:30:27.825081 systemd[1]: Reached target machines.target - Containers. Jan 13 20:30:27.825092 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:30:27.825103 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:30:27.825114 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:30:27.825125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:30:27.825136 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:30:27.825147 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:30:27.825158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:30:27.825170 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:30:27.825181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:30:27.825192 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:30:27.825203 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:30:27.825215 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:30:27.825226 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:30:27.825237 kernel: fuse: init (API version 7.39) Jan 13 20:30:27.825247 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:30:27.825259 kernel: loop: module loaded Jan 13 20:30:27.825270 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:30:27.825281 kernel: ACPI: bus type drm_connector registered Jan 13 20:30:27.825291 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:30:27.825302 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:30:27.825313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:30:27.825324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:30:27.825335 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:30:27.825346 systemd[1]: Stopped verity-setup.service. Jan 13 20:30:27.825359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
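The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts above are six instances of a single template unit; the text after "@" becomes the module name handed to modprobe. The same mechanism works interactively:

```sh
# Load the "loop" module through the template unit, as the boot did.
systemctl start modprobe@loop.service
# Show the template to see how the %i instance specifier reaches ExecStart.
systemctl cat modprobe@.service
```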
Jan 13 20:30:27.825377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:30:27.825437 systemd-journald[1112]: Collecting audit messages is disabled. Jan 13 20:30:27.825460 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:30:27.825471 systemd-journald[1112]: Journal started Jan 13 20:30:27.825496 systemd-journald[1112]: Runtime Journal (/run/log/journal/b359f2f1c50640429a74f50fcb7152b7) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:30:27.640411 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:30:27.658430 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:30:27.658798 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:30:27.826945 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 20:30:27.828847 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:30:27.829498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:30:27.830447 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:30:27.833411 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:30:27.834528 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:30:27.835708 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:30:27.835851 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:30:27.837036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:30:27.837185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:30:27.838295 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:30:27.838459 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:30:27.839521 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:30:27.839662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:30:27.840953 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:30:27.841092 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 20:30:27.842179 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:30:27.842327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:30:27.843449 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:30:27.844570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:30:27.845953 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:30:27.860025 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:30:27.872516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:30:27.874521 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:30:27.875320 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:30:27.875369 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:30:27.877053 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:30:27.879195 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
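The 5.9M runtime journal above lives on a tmpfs under /run/log/journal until systemd-journal-flush.service moves it to persistent storage a few lines below. Its footprint and file metadata are queryable:

```sh
# Space consumed by all journal files (runtime tmpfs until flushed, then /var).
journalctl --disk-usage
# Internal header metadata (sequence numbers, rotation state) of the files read.
journalctl --header
```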
Jan 13 20:30:27.881595 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:30:27.882479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:30:27.884131 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:30:27.886120 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:30:27.887056 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:30:27.890591 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:30:27.891669 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:30:27.895607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:30:27.899981 systemd-journald[1112]: Time spent on flushing to /var/log/journal/b359f2f1c50640429a74f50fcb7152b7 is 12.164ms for 855 entries. Jan 13 20:30:27.899981 systemd-journald[1112]: System Journal (/var/log/journal/b359f2f1c50640429a74f50fcb7152b7) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:30:27.971593 systemd-journald[1112]: Received client request to flush runtime journal. Jan 13 20:30:27.971646 kernel: loop0: detected capacity change from 0 to 113536 Jan 13 20:30:27.971665 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:30:27.900427 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:30:27.905187 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:30:27.907707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:30:27.909110 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:30:27.910977 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:30:27.913522 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:30:27.926621 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:30:27.934501 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:30:27.941981 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 13 20:30:27.941991 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 13 20:30:27.943660 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:30:27.945746 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:30:27.953860 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:30:27.957096 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:30:27.958899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:30:27.961655 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:30:27.976937 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
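systemd-machine-id-commit above persists the machine ID that was initialized from the VM UUID at first boot; note it is the same hex string used as the journal directory name. It can be read back with:

```sh
# The committed machine ID (matches the /run/log/journal/<id> directory above).
cat /etc/machine-id
# Same value via the tool that manages it.
systemd-machine-id-setup --print
```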
Jan 13 20:30:27.982402 kernel: loop1: detected capacity change from 0 to 194512 Jan 13 20:30:27.982840 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:30:27.991683 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:30:27.993233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:30:27.995400 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:30:28.029554 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 13 20:30:28.029573 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 13 20:30:28.033734 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:30:28.039490 kernel: loop2: detected capacity change from 0 to 116808 Jan 13 20:30:28.075432 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:30:28.080407 kernel: loop4: detected capacity change from 0 to 194512 Jan 13 20:30:28.085477 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:30:28.088830 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:30:28.089211 (sd-merge)[1184]: Merged extensions into '/usr'. Jan 13 20:30:28.095354 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:30:28.095536 systemd[1]: Reloading... Jan 13 20:30:28.145424 zram_generator::config[1209]: No configuration found. Jan 13 20:30:28.199545 ldconfig[1151]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:30:28.242306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:30:28.277202 systemd[1]: Reloading finished in 178 ms. Jan 13 20:30:28.305883 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:30:28.307084 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:30:28.319551 systemd[1]: Starting ensure-sysext.service... Jan 13 20:30:28.321296 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:30:28.330310 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:30:28.330327 systemd[1]: Reloading... Jan 13 20:30:28.338982 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:30:28.339588 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:30:28.340323 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:30:28.340686 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 13 20:30:28.340808 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Jan 13 20:30:28.342878 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:30:28.342947 systemd-tmpfiles[1245]: Skipping /boot Jan 13 20:30:28.350042 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:30:28.350140 systemd-tmpfiles[1245]: Skipping /boot Jan 13 20:30:28.384406 zram_generator::config[1275]: No configuration found. 
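The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes images onto /usr, followed by the daemon reload that picks up the merged unit files. The merge can be inspected or redone at runtime:

```sh
# Which hierarchies are extended, and by which images.
systemd-sysext status
# Drop and re-apply the overlay after changing the installed extension images.
systemd-sysext refresh
```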
Jan 13 20:30:28.466534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:30:28.501915 systemd[1]: Reloading finished in 171 ms. Jan 13 20:30:28.516430 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:30:28.528843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:30:28.537768 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:30:28.540189 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:30:28.542431 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:30:28.550778 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:30:28.554233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:30:28.559724 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 20:30:28.564477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:30:28.565719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:30:28.572095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:30:28.576722 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:30:28.579957 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:30:28.580927 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:30:28.582822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:30:28.582966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:30:28.584621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:30:28.585446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:30:28.592156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:30:28.597501 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:30:28.600706 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:30:28.600781 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Jan 13 20:30:28.602697 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:30:28.605859 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:30:28.606016 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:30:28.617421 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:30:28.619232 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:30:28.622465 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:30:28.631654 systemd[1]: Finished ensure-sysext.service. Jan 13 20:30:28.636459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
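The earlier "Duplicate line for path ..., ignoring" warnings from systemd-tmpfiles come from the same path being declared in more than one tmpfiles.d fragment; the first declaration read wins and later ones are skipped. A hypothetical pair of fragments reproducing the warning for one of the paths the log names:

```
# /usr/lib/tmpfiles.d/a.conf (hypothetical)
d /var/log/journal 2755 root systemd-journal - -

# /usr/lib/tmpfiles.d/b.conf (hypothetical): same path again, so tmpfiles
# logs: Duplicate line for path "/var/log/journal", ignoring.
d /var/log/journal 2755 root systemd-journal - -
```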
Jan 13 20:30:28.646700 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:30:28.649501 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:30:28.652579 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:30:28.656609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:30:28.657481 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:30:28.662543 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:30:28.666706 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:30:28.667843 augenrules[1370]: No rules Jan 13 20:30:28.669594 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1340) Jan 13 20:30:28.668851 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:30:28.669249 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:30:28.671434 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:30:28.671597 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:30:28.673696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:30:28.673835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:30:28.677895 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:30:28.678045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:30:28.684289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:30:28.684530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:30:28.686864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:30:28.689434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:30:28.704820 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:30:28.706951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:30:28.707017 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:30:28.746973 systemd-resolved[1312]: Positive Trust Anchors: Jan 13 20:30:28.747049 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:30:28.747080 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:30:28.756034 systemd-resolved[1312]: Defaulting to hostname 'linux'. Jan 13 20:30:28.756073 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
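The "Positive Trust Anchors" block is systemd-resolved loading its built-in DNSSEC anchor, the 2017 root KSK (key tag 20326), plus negative anchors for private and reverse-lookup zones. Anchors can be extended or overridden with drop-ins; a sketch using the same DS record format resolved logs above:

```
# /etc/dnssec-trust-anchors.d/root.positive (assumed drop-in location)
# <domain> IN DS <key tag> <algorithm> <digest type> <digest>
. IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D
```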
Jan 13 20:30:28.757761 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:30:28.760574 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:30:28.761947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:30:28.768572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:30:28.773363 systemd-networkd[1368]: lo: Link UP Jan 13 20:30:28.773373 systemd-networkd[1368]: lo: Gained carrier Jan 13 20:30:28.776092 systemd-networkd[1368]: Enumeration completed Jan 13 20:30:28.777372 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:30:28.778732 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:30:28.779826 systemd[1]: Reached target network.target - Network. Jan 13 20:30:28.780665 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:30:28.780674 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:30:28.783722 systemd-networkd[1368]: eth0: Link UP Jan 13 20:30:28.783730 systemd-networkd[1368]: eth0: Gained carrier Jan 13 20:30:28.783747 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:30:28.783867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:30:28.795449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:30:28.804662 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:30:28.805506 systemd-timesyncd[1375]: Network configuration changed, trying to establish connection. Jan 13 20:30:28.806198 systemd-timesyncd[1375]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:30:28.806250 systemd-timesyncd[1375]: Initial clock synchronization to Mon 2025-01-13 20:30:28.917294 UTC. Jan 13 20:30:28.821702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:30:28.832963 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:30:28.835460 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:30:28.858477 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:30:28.861725 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:30:28.894883 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:30:28.896049 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:30:28.898492 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:30:28.899309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:30:28.900228 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:30:28.901368 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:30:28.902244 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
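eth0 above matched the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" note) and then acquired 10.0.0.144/16 over DHCPv4. A minimal .network unit of the same shape, under an illustrative file name:

```ini
# /etc/systemd/network/50-dhcp.network (hypothetical): pin the match to the
# NIC name and enable DHCP, as the zz-default.network in the log does broadly.
[Match]
Name=eth0

[Network]
DHCP=yes
```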
Jan 13 20:30:28.903179 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:30:28.904073 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:30:28.904106 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:30:28.904930 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:30:28.906438 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:30:28.908578 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:30:28.920582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:30:28.922856 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:30:28.924149 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:30:28.925073 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:30:28.925793 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:30:28.926526 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:30:28.926558 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:30:28.927624 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:30:28.929569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:30:28.932521 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:30:28.933533 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:30:28.938691 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:30:28.939474 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:30:28.940532 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:30:28.945799 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:30:28.948615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:30:28.952980 jq[1414]: false Jan 13 20:30:28.954045 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 20:30:28.963554 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:30:28.969988 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:30:28.970495 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 20:30:28.974538 dbus-daemon[1413]: [system] SELinux support is enabled Jan 13 20:30:28.974566 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:30:28.976345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:30:28.977723 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
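dbus.socket, docker.socket and sshd.socket above are socket activation: systemd owns the listening socket and spawns the service on the first connection, which is why docker.service itself never has to start during boot. The live listeners and their target units can be listed:

```sh
# Listening sockets and the unit each one activates on demand.
systemctl list-sockets
```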
Jan 13 20:30:28.981977 extend-filesystems[1415]: Found loop3 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found loop4 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found loop5 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda1 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda2 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda3 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found usr Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda4 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda6 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda7 Jan 13 20:30:28.984448 extend-filesystems[1415]: Found vda9 Jan 13 20:30:28.984448 extend-filesystems[1415]: Checking size of /dev/vda9 Jan 13 20:30:28.984370 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:30:28.998574 jq[1431]: true Jan 13 20:30:28.989689 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 20:30:28.989848 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 20:30:28.990096 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 20:30:28.990226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 20:30:28.991886 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 20:30:28.992039 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 20:30:29.001857 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 20:30:29.001915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 20:30:29.005545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 20:30:29.005575 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 20:30:29.011993 jq[1436]: true Jan 13 20:30:29.020083 (ntainerd)[1444]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 20:30:29.020350 update_engine[1430]: I20250113 20:30:29.019979 1430 main.cc:92] Flatcar Update Engine starting Jan 13 20:30:29.023549 tar[1434]: linux-arm64/helm Jan 13 20:30:29.027127 systemd[1]: Started update-engine.service - Update Engine. Jan 13 20:30:29.030213 extend-filesystems[1415]: Resized partition /dev/vda9 Jan 13 20:30:29.031721 update_engine[1430]: I20250113 20:30:29.031653 1430 update_check_scheduler.cc:74] Next update check in 6m36s Jan 13 20:30:29.032640 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 20:30:29.040008 extend-filesystems[1452]: resize2fs 1.47.1 (20-May-2024) Jan 13 20:30:29.046409 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1352) Jan 13 20:30:29.046503 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 13 20:30:29.047383 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 20:30:29.047655 systemd-logind[1423]: New seat seat0. Jan 13 20:30:29.060262 systemd[1]: Started systemd-logind.service - User Login Management. 
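Two things happen side by side here. extend-filesystems scans the block devices and decides /dev/vda9 needs growing, which resize2fs completes online just below; ext4 supports the same grow by hand while mounted:

```sh
# Grow the mounted ext4 root to fill its (already enlarged) partition.
resize2fs /dev/vda9
```

Meanwhile update_engine schedules its first check for 6m36s out, with locksmithd (strategy "reboot") coordinating any post-update reboot. Assuming the stock Flatcar client and its single-dash flags:

```sh
# Current updater state (LAST_CHECKED_TIME, CURRENT_OP, NEW_VERSION, ...).
update_engine_client -status
# Trigger a check now instead of waiting for the scheduled one.
update_engine_client -check_for_update
```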
Jan 13 20:30:29.081415 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 13 20:30:29.099171 extend-filesystems[1452]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 20:30:29.099171 extend-filesystems[1452]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 20:30:29.099171 extend-filesystems[1452]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 20:30:29.104903 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jan 13 20:30:29.100015 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 20:30:29.102455 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 20:30:29.111083 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Jan 13 20:30:29.116211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 20:30:29.117860 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 20:30:29.135687 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 20:30:29.234423 containerd[1444]: time="2025-01-13T20:30:29.233700534Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 13 20:30:29.263273 containerd[1444]: time="2025-01-13T20:30:29.263140689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.265525 containerd[1444]: time="2025-01-13T20:30:29.265487000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:30:29.265616 containerd[1444]: time="2025-01-13T20:30:29.265601793Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 20:30:29.265672 containerd[1444]: time="2025-01-13T20:30:29.265659390Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 20:30:29.265903 containerd[1444]: time="2025-01-13T20:30:29.265881402Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 20:30:29.265972 containerd[1444]: time="2025-01-13T20:30:29.265958172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266090 containerd[1444]: time="2025-01-13T20:30:29.266071514Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266150 containerd[1444]: time="2025-01-13T20:30:29.266136724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266427 containerd[1444]: time="2025-01-13T20:30:29.266383829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266514 containerd[1444]: time="2025-01-13T20:30:29.266498943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266573 containerd[1444]: time="2025-01-13T20:30:29.266559964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266620 containerd[1444]: time="2025-01-13T20:30:29.266608217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.266780 containerd[1444]: time="2025-01-13T20:30:29.266760307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.267073 containerd[1444]: time="2025-01-13T20:30:29.267048939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 20:30:29.267262 containerd[1444]: time="2025-01-13T20:30:29.267238084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 20:30:29.267339 containerd[1444]: time="2025-01-13T20:30:29.267324520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 20:30:29.267491 containerd[1444]: time="2025-01-13T20:30:29.267472300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 13 20:30:29.267598 containerd[1444]: time="2025-01-13T20:30:29.267581776Z" level=info msg="metadata content store policy set" policy=shared Jan 13 20:30:29.271617 containerd[1444]: time="2025-01-13T20:30:29.271589713Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.271726497Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.271748529Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.271767298Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.271782523Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.271940816Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272263643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272367440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272384034Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272430636Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272445095Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272458146Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272470632Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272483762Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274225 containerd[1444]: time="2025-01-13T20:30:29.272497860Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272510507Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272521704Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272534754Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272555900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272570239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272588485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272600609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272612934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272626306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272637543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272649425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272662153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272675848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274560 containerd[1444]: time="2025-01-13T20:30:29.272688656Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272700740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272712984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272732156Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272753705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272766312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272778275Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272940393Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272958156Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272968749Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272983289Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.272991909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.273003549Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.273013941Z" level=info msg="NRI interface is disabled by configuration." Jan 13 20:30:29.274809 containerd[1444]: time="2025-01-13T20:30:29.273028562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 13 20:30:29.275060 containerd[1444]: time="2025-01-13T20:30:29.273381558Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 13 20:30:29.275060 containerd[1444]: time="2025-01-13T20:30:29.273445640Z" level=info msg="Connect containerd service" Jan 13 20:30:29.275060 containerd[1444]: time="2025-01-13T20:30:29.273480641Z" level=info msg="using legacy CRI server" Jan 13 20:30:29.275060 containerd[1444]: time="2025-01-13T20:30:29.273487811Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 20:30:29.275060 containerd[1444]: time="2025-01-13T20:30:29.273713851Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 13 20:30:29.275549 containerd[1444]: time="2025-01-13T20:30:29.275515564Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:30:29.275812 
containerd[1444]: time="2025-01-13T20:30:29.275783452Z" level=info msg="Start subscribing containerd event" Jan 13 20:30:29.275920 containerd[1444]: time="2025-01-13T20:30:29.275903843Z" level=info msg="Start recovering state" Jan 13 20:30:29.276030 containerd[1444]: time="2025-01-13T20:30:29.276015453Z" level=info msg="Start event monitor" Jan 13 20:30:29.276092 containerd[1444]: time="2025-01-13T20:30:29.276079012Z" level=info msg="Start snapshots syncer" Jan 13 20:30:29.276146 containerd[1444]: time="2025-01-13T20:30:29.276134958Z" level=info msg="Start cni network conf syncer for default" Jan 13 20:30:29.276201 containerd[1444]: time="2025-01-13T20:30:29.276189977Z" level=info msg="Start streaming server" Jan 13 20:30:29.276908 containerd[1444]: time="2025-01-13T20:30:29.276884772Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 20:30:29.277050 containerd[1444]: time="2025-01-13T20:30:29.277034203Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 20:30:29.277280 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 20:30:29.282395 containerd[1444]: time="2025-01-13T20:30:29.280112525Z" level=info msg="containerd successfully booted in 0.047503s" Jan 13 20:30:29.391404 tar[1434]: linux-arm64/LICENSE Jan 13 20:30:29.391525 tar[1434]: linux-arm64/README.md Jan 13 20:30:29.402072 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 20:30:29.522781 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 20:30:29.541311 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 20:30:29.556979 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 20:30:29.562268 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 20:30:29.562561 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 20:30:29.565716 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 20:30:29.580479 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 20:30:29.591729 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 20:30:29.593733 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 20:30:29.594788 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 20:30:30.380662 systemd-networkd[1368]: eth0: Gained IPv6LL Jan 13 20:30:30.384477 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 20:30:30.385960 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 20:30:30.396676 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:30:30.398866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:30.400916 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 20:30:30.417957 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:30:30.419467 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:30:30.423198 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 20:30:30.423854 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 13 20:30:30.903327 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:30.904640 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 13 20:30:30.907612 systemd[1]: Startup finished in 599ms (kernel) + 4.547s (initrd) + 3.666s (userspace) = 8.814s. Jan 13 20:30:30.907626 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:30:31.455364 kubelet[1526]: E0113 20:30:31.455265 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:30:31.457974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:30:31.458119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:30:35.269773 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 20:30:35.271101 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:32984.service - OpenSSH per-connection server daemon (10.0.0.1:32984). Jan 13 20:30:35.349913 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 32984 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:35.353059 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:35.361439 systemd-logind[1423]: New session 1 of user core. Jan 13 20:30:35.362490 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 20:30:35.380710 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 20:30:35.390655 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 20:30:35.393268 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 20:30:35.400430 (systemd)[1545]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 13 20:30:35.471989 systemd[1545]: Queued start job for default target default.target. Jan 13 20:30:35.481332 systemd[1545]: Created slice app.slice - User Application Slice. Jan 13 20:30:35.481378 systemd[1545]: Reached target paths.target - Paths. Jan 13 20:30:35.481409 systemd[1545]: Reached target timers.target - Timers. Jan 13 20:30:35.482684 systemd[1545]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 20:30:35.492545 systemd[1545]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 20:30:35.492613 systemd[1545]: Reached target sockets.target - Sockets. Jan 13 20:30:35.492626 systemd[1545]: Reached target basic.target - Basic System. Jan 13 20:30:35.492662 systemd[1545]: Reached target default.target - Main User Target. Jan 13 20:30:35.492690 systemd[1545]: Startup finished in 86ms. Jan 13 20:30:35.493031 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 20:30:35.494483 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 20:30:35.554117 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988). Jan 13 20:30:35.595342 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:35.596720 sshd-session[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:35.600621 systemd-logind[1423]: New session 2 of user core. Jan 13 20:30:35.611585 systemd[1]: Started session-2.scope - Session 2 of User core. 
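The kubelet crash above ("open /var/lib/kubelet/config.yaml: no such file or directory", exit status 1) is the normal state of a kubeadm-style node before initialization: kubeadm init or kubeadm join writes that file, so until one of them runs the unit exits and systemd keeps rescheduling it (restart counters 1 and 2 appear later in this log). For orientation, a minimal sketch of what eventually lands there (illustrative values only, not taken from this host):

    # /var/lib/kubelet/config.yaml -- minimal KubeletConfiguration sketch
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                # must match containerd's SystemdCgroup
    staticPodPath: /etc/kubernetes/manifests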
Jan 13 20:30:35.663426 sshd[1558]: Connection closed by 10.0.0.1 port 32988 Jan 13 20:30:35.664274 sshd-session[1556]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:35.676965 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:32988.service: Deactivated successfully. Jan 13 20:30:35.678591 systemd[1]: session-2.scope: Deactivated successfully. Jan 13 20:30:35.680598 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Jan 13 20:30:35.681822 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:32990.service - OpenSSH per-connection server daemon (10.0.0.1:32990). Jan 13 20:30:35.683597 systemd-logind[1423]: Removed session 2. Jan 13 20:30:35.723133 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 32990 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:35.724531 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:35.729068 systemd-logind[1423]: New session 3 of user core. Jan 13 20:30:35.735528 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 20:30:35.782968 sshd[1565]: Connection closed by 10.0.0.1 port 32990 Jan 13 20:30:35.783502 sshd-session[1563]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:35.793809 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:32990.service: Deactivated successfully. Jan 13 20:30:35.795177 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 20:30:35.796416 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Jan 13 20:30:35.797583 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:32994.service - OpenSSH per-connection server daemon (10.0.0.1:32994). Jan 13 20:30:35.798422 systemd-logind[1423]: Removed session 3. Jan 13 20:30:35.838717 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 32994 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:35.840304 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:35.844481 systemd-logind[1423]: New session 4 of user core. Jan 13 20:30:35.858582 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 20:30:35.912430 sshd[1572]: Connection closed by 10.0.0.1 port 32994 Jan 13 20:30:35.912981 sshd-session[1570]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:35.929215 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:32994.service: Deactivated successfully. Jan 13 20:30:35.931032 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 20:30:35.936605 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Jan 13 20:30:35.948802 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:33008.service - OpenSSH per-connection server daemon (10.0.0.1:33008). Jan 13 20:30:35.949786 systemd-logind[1423]: Removed session 4. Jan 13 20:30:35.988194 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 33008 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:35.989680 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:35.996106 systemd-logind[1423]: New session 5 of user core. Jan 13 20:30:36.005614 systemd[1]: Started session-5.scope - Session 5 of User core. 
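Each login above gets its own transient unit named sshd@N-10.0.0.144:22-10.0.0.1:PORT.service, which is the per-connection pattern of a socket-activated sshd: the socket unit accepts the TCP connection and spawns one service instance per peer, and the instance name encodes sequence number, local address, and remote address. A sketch of the socket option that produces this naming (the actual unit contents are assumed; only the naming pattern is visible in the log):

    # sshd.socket fragment -- Accept=yes yields one
    # sshd@<seq>-<local>:<port>-<peer>:<port>.service per connection
    [Socket]
    ListenStream=22
    Accept=yes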
Jan 13 20:30:36.066371 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 20:30:36.070459 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:30:36.083464 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 13 20:30:36.085690 sshd[1579]: Connection closed by 10.0.0.1 port 33008 Jan 13 20:30:36.086872 sshd-session[1577]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:36.097308 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:33008.service: Deactivated successfully. Jan 13 20:30:36.099487 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 20:30:36.100823 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Jan 13 20:30:36.112673 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:33020.service - OpenSSH per-connection server daemon (10.0.0.1:33020). Jan 13 20:30:36.113543 systemd-logind[1423]: Removed session 5. Jan 13 20:30:36.157658 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 33020 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:36.159330 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:36.163912 systemd-logind[1423]: New session 6 of user core. Jan 13 20:30:36.171565 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 20:30:36.223599 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 20:30:36.223884 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:30:36.227400 sudo[1589]: pam_unix(sudo:session): session closed for user root Jan 13 20:30:36.232073 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 20:30:36.232641 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:30:36.252832 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:30:36.281882 augenrules[1611]: No rules Jan 13 20:30:36.283169 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:30:36.283369 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:30:36.284635 sudo[1588]: pam_unix(sudo:session): session closed for user root Jan 13 20:30:36.286424 sshd[1587]: Connection closed by 10.0.0.1 port 33020 Jan 13 20:30:36.286567 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Jan 13 20:30:36.300005 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:33020.service: Deactivated successfully. Jan 13 20:30:36.301558 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 20:30:36.303657 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Jan 13 20:30:36.305533 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:33026.service - OpenSSH per-connection server daemon (10.0.0.1:33026). Jan 13 20:30:36.306695 systemd-logind[1423]: Removed session 6. Jan 13 20:30:36.350357 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 33026 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:30:36.349119 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:30:36.355180 systemd-logind[1423]: New session 7 of user core. Jan 13 20:30:36.365637 systemd[1]: Started session-7.scope - Session 7 of User core. 
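The augenrules[1611]: "No rules" message follows directly from the two sudo commands above it: augenrules compiles every *.rules file under /etc/audit/rules.d into one kernel rule set, and the rm just deleted the only two files there (80-selinux.rules and 99-default.rules), so restarting audit-rules.service loads an empty set. Standard auditd tooling to confirm the state (commands assumed present on the host):

    augenrules --check    # is the compiled rule file out of date?
    auditctl -l           # list loaded kernel rules; prints "No rules" here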
Jan 13 20:30:36.416980 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:30:36.417267 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:30:36.780707 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 20:30:36.780843 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 20:30:37.036907 dockerd[1642]: time="2025-01-13T20:30:37.036622961Z" level=info msg="Starting up" Jan 13 20:30:37.302010 dockerd[1642]: time="2025-01-13T20:30:37.301890750Z" level=info msg="Loading containers: start." Jan 13 20:30:37.455421 kernel: Initializing XFRM netlink socket Jan 13 20:30:37.532672 systemd-networkd[1368]: docker0: Link UP Jan 13 20:30:37.569028 dockerd[1642]: time="2025-01-13T20:30:37.568785016Z" level=info msg="Loading containers: done." Jan 13 20:30:37.584318 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck485921101-merged.mount: Deactivated successfully. Jan 13 20:30:37.586078 dockerd[1642]: time="2025-01-13T20:30:37.586025525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 20:30:37.586160 dockerd[1642]: time="2025-01-13T20:30:37.586142247Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 13 20:30:37.586285 dockerd[1642]: time="2025-01-13T20:30:37.586255796Z" level=info msg="Daemon has completed initialization" Jan 13 20:30:37.618056 dockerd[1642]: time="2025-01-13T20:30:37.617999513Z" level=info msg="API listen on /run/docker.sock" Jan 13 20:30:37.618437 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 20:30:38.391254 containerd[1444]: time="2025-01-13T20:30:38.391210459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Jan 13 20:30:39.142100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984634022.mount: Deactivated successfully. 
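Two details in the Docker startup above are worth flagging. The overlay2 warning means this kernel was built with CONFIG_OVERLAY_FS_REDIRECT_DIR, which is incompatible with dockerd's fast native-diff path, so image builds fall back to a slower differ; it is harmless for merely running containers. And the PullImage lines that begin here are issued through containerd's CRI API, consistent with the sudo /home/core/install.sh invocation above pre-pulling control-plane images (the script's contents are not shown in this log, so that attribution is an inference). A quick check of the active storage driver (standard docker CLI):

    docker info --format '{{.Driver}}'    # prints: overlay2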
Jan 13 20:30:40.474075 containerd[1444]: time="2025-01-13T20:30:40.473997822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:40.474501 containerd[1444]: time="2025-01-13T20:30:40.474443901Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201252" Jan 13 20:30:40.475436 containerd[1444]: time="2025-01-13T20:30:40.475372824Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:40.478349 containerd[1444]: time="2025-01-13T20:30:40.478309984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:40.479708 containerd[1444]: time="2025-01-13T20:30:40.479666523Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.08841234s" Jan 13 20:30:40.479708 containerd[1444]: time="2025-01-13T20:30:40.479707222Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Jan 13 20:30:40.501040 containerd[1444]: time="2025-01-13T20:30:40.500991933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Jan 13 20:30:41.708499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 20:30:41.720697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:41.808997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:41.812904 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:30:41.860332 kubelet[1921]: E0113 20:30:41.860075 1921 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:30:41.863142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:30:41.863278 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
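Each successful pull above produces the same record pattern: an ImageCreate event for the tag, one for the image-config blob (the sha256:50c8... id), one for the repo digest, and a closing "Pulled image ... in <duration>" line that binds tag, image id, and digest together (here 2.088s, matching the gap since the PullImage request at 20:30:38.391). The remaining pulls below (controller-manager, scheduler, proxy, coredns, pause, etcd) follow the identical pattern. To inspect the results through the same CRI socket (crictl assumed installed):

    crictl images --digests | grep registry.k8s.io/kube-apiserver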
Jan 13 20:30:42.242436 containerd[1444]: time="2025-01-13T20:30:42.241425542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:42.242759 containerd[1444]: time="2025-01-13T20:30:42.242441383Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381299" Jan 13 20:30:42.242869 containerd[1444]: time="2025-01-13T20:30:42.242827980Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:42.245694 containerd[1444]: time="2025-01-13T20:30:42.245655325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:42.247981 containerd[1444]: time="2025-01-13T20:30:42.247924719Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.746890405s" Jan 13 20:30:42.247981 containerd[1444]: time="2025-01-13T20:30:42.247968730Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Jan 13 20:30:42.267310 containerd[1444]: time="2025-01-13T20:30:42.267266584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Jan 13 20:30:43.170096 containerd[1444]: time="2025-01-13T20:30:43.170031994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:43.170810 containerd[1444]: time="2025-01-13T20:30:43.170771108Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765642" Jan 13 20:30:43.171395 containerd[1444]: time="2025-01-13T20:30:43.171351698Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:43.174523 containerd[1444]: time="2025-01-13T20:30:43.174490349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:43.175668 containerd[1444]: time="2025-01-13T20:30:43.175584300Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 908.270899ms" Jan 13 20:30:43.175668 containerd[1444]: time="2025-01-13T20:30:43.175620842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Jan 13 20:30:43.195358 
containerd[1444]: time="2025-01-13T20:30:43.195298642Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Jan 13 20:30:44.219852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408686767.mount: Deactivated successfully. Jan 13 20:30:44.851410 containerd[1444]: time="2025-01-13T20:30:44.851342741Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:44.851999 containerd[1444]: time="2025-01-13T20:30:44.851939351Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273979" Jan 13 20:30:44.852903 containerd[1444]: time="2025-01-13T20:30:44.852875816Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:44.854789 containerd[1444]: time="2025-01-13T20:30:44.854716421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:44.855627 containerd[1444]: time="2025-01-13T20:30:44.855475299Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.660133177s" Jan 13 20:30:44.855627 containerd[1444]: time="2025-01-13T20:30:44.855512597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Jan 13 20:30:44.874573 containerd[1444]: time="2025-01-13T20:30:44.874535750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 13 20:30:45.476533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923613066.mount: Deactivated successfully. 
Jan 13 20:30:46.287470 containerd[1444]: time="2025-01-13T20:30:46.287408495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.288407 containerd[1444]: time="2025-01-13T20:30:46.288357531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jan 13 20:30:46.289505 containerd[1444]: time="2025-01-13T20:30:46.289467219Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.294412 containerd[1444]: time="2025-01-13T20:30:46.292973894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.294412 containerd[1444]: time="2025-01-13T20:30:46.294183975Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.419600983s" Jan 13 20:30:46.294412 containerd[1444]: time="2025-01-13T20:30:46.294209554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 13 20:30:46.312929 containerd[1444]: time="2025-01-13T20:30:46.312891347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 13 20:30:46.859774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2285207311.mount: Deactivated successfully. 
Jan 13 20:30:46.863916 containerd[1444]: time="2025-01-13T20:30:46.863860518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.864695 containerd[1444]: time="2025-01-13T20:30:46.864645174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jan 13 20:30:46.865251 containerd[1444]: time="2025-01-13T20:30:46.865218782Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.867693 containerd[1444]: time="2025-01-13T20:30:46.867655581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:46.868594 containerd[1444]: time="2025-01-13T20:30:46.868559352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 555.475199ms" Jan 13 20:30:46.868594 containerd[1444]: time="2025-01-13T20:30:46.868592028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 13 20:30:46.888546 containerd[1444]: time="2025-01-13T20:30:46.888502304Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 13 20:30:47.481971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813592782.mount: Deactivated successfully. Jan 13 20:30:48.962320 containerd[1444]: time="2025-01-13T20:30:48.962267863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:48.963342 containerd[1444]: time="2025-01-13T20:30:48.963057710Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jan 13 20:30:48.964051 containerd[1444]: time="2025-01-13T20:30:48.963975216Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:48.967684 containerd[1444]: time="2025-01-13T20:30:48.967021735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:30:48.968510 containerd[1444]: time="2025-01-13T20:30:48.968421703Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.079878868s" Jan 13 20:30:48.968510 containerd[1444]: time="2025-01-13T20:30:48.968458538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jan 13 20:30:52.002226 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
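The restart cadence is visible in the timestamps: the kubelet failed at 20:30:41.863 and the next "Scheduled restart job" fires at 20:30:52.002, and the earlier failure/restart pair shows the same roughly 10 s gap. That is consistent with unit settings along these lines (assumed; the actual kubelet.service drop-in is not shown in this log):

    [Service]
    Restart=always
    RestartSec=10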
Jan 13 20:30:52.011561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:52.110947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:52.114796 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 20:30:52.159524 kubelet[2150]: E0113 20:30:52.159463 2150 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 20:30:52.161784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 20:30:52.161907 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 20:30:53.435647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:53.445832 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:53.461049 systemd[1]: Reloading requested from client PID 2165 ('systemctl') (unit session-7.scope)... Jan 13 20:30:53.461066 systemd[1]: Reloading... Jan 13 20:30:53.530503 zram_generator::config[2204]: No configuration found. Jan 13 20:30:53.683901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:30:53.736744 systemd[1]: Reloading finished in 275 ms. Jan 13 20:30:53.786760 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:30:53.786874 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:30:53.788426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:53.790100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:30:53.885951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:30:53.890258 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:30:53.929569 kubelet[2250]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:30:53.929569 kubelet[2250]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:30:53.929569 kubelet[2250]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
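The three deprecation warnings mean the kubelet is still being launched with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on its command line. The first and last have direct KubeletConfiguration equivalents; a sketch of the replacement fields (kubelet.config.k8s.io/v1beta1; the endpoint value is inferred from the containerd socket earlier in this log, the plugin dir from the Flexvolume message just below):

    # config.yaml fields that replace two of the deprecated flags
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/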
Jan 13 20:30:53.929944 kubelet[2250]: I0113 20:30:53.929607 2250 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:30:54.679975 kubelet[2250]: I0113 20:30:54.679931 2250 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:30:54.680249 kubelet[2250]: I0113 20:30:54.680018 2250 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:30:54.682469 kubelet[2250]: I0113 20:30:54.682369 2250 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:30:54.702632 kubelet[2250]: I0113 20:30:54.701879 2250 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:30:54.704955 kubelet[2250]: E0113 20:30:54.704934 2250 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.712102 kubelet[2250]: I0113 20:30:54.712058 2250 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:30:54.713050 kubelet[2250]: I0113 20:30:54.713006 2250 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:30:54.713250 kubelet[2250]: I0113 20:30:54.713225 2250 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:30:54.713250 kubelet[2250]: I0113 20:30:54.713251 2250 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:30:54.713395 kubelet[2250]: I0113 20:30:54.713262 2250 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:30:54.713438 kubelet[2250]: I0113 20:30:54.713421 2250 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:54.715750 kubelet[2250]: I0113 20:30:54.715719 2250 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:30:54.719467 kubelet[2250]: 
I0113 20:30:54.715756 2250 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:30:54.719467 kubelet[2250]: I0113 20:30:54.716229 2250 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:30:54.719467 kubelet[2250]: I0113 20:30:54.716249 2250 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:30:54.719467 kubelet[2250]: W0113 20:30:54.716299 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.719467 kubelet[2250]: E0113 20:30:54.716365 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.719961 kubelet[2250]: I0113 20:30:54.719910 2250 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:30:54.720491 kubelet[2250]: I0113 20:30:54.720467 2250 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:30:54.720541 kubelet[2250]: W0113 20:30:54.720484 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.720541 kubelet[2250]: E0113 20:30:54.720533 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.144:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.721056 kubelet[2250]: W0113 20:30:54.721020 2250 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
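All the reflector failures here ("dial tcp 10.0.0.144:6443: connect: connection refused") are the expected bootstrap ordering on a control-plane node: this kubelet is itself about to create kube-apiserver as a static pod from /etc/kubernetes/manifests (the path added just above), so nothing listens on 6443 yet and the client-go reflectors back off and retry. A quick liveness probe while this settles (curl assumed present):

    curl -sk https://10.0.0.144:6443/healthz || echo "apiserver not up yet"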
Jan 13 20:30:54.722159 kubelet[2250]: I0113 20:30:54.722002 2250 server.go:1256] "Started kubelet" Jan 13 20:30:54.722159 kubelet[2250]: I0113 20:30:54.722126 2250 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:30:54.722474 kubelet[2250]: I0113 20:30:54.722452 2250 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:30:54.722520 kubelet[2250]: I0113 20:30:54.722515 2250 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:30:54.723863 kubelet[2250]: I0113 20:30:54.723830 2250 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:30:54.724673 kubelet[2250]: I0113 20:30:54.724141 2250 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:30:54.725453 kubelet[2250]: I0113 20:30:54.725353 2250 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:30:54.725527 kubelet[2250]: I0113 20:30:54.725459 2250 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:30:54.725527 kubelet[2250]: I0113 20:30:54.725516 2250 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:30:54.725827 kubelet[2250]: W0113 20:30:54.725784 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.725827 kubelet[2250]: E0113 20:30:54.725825 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.726156 kubelet[2250]: E0113 20:30:54.726099 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Jan 13 20:30:54.727170 kubelet[2250]: E0113 20:30:54.727109 2250 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:30:54.727363 kubelet[2250]: I0113 20:30:54.727344 2250 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:30:54.731060 kubelet[2250]: I0113 20:30:54.731035 2250 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:30:54.731247 kubelet[2250]: I0113 20:30:54.731185 2250 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:30:54.732822 kubelet[2250]: E0113 20:30:54.731438 2250 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5aa13482cdbb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:30:54.721969595 +0000 UTC m=+0.828324037,LastTimestamp:2025-01-13 20:30:54.721969595 +0000 UTC m=+0.828324037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 20:30:54.742456 kubelet[2250]: I0113 20:30:54.742425 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:30:54.744014 kubelet[2250]: I0113 20:30:54.743992 2250 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:30:54.744142 kubelet[2250]: I0113 20:30:54.744021 2250 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:30:54.744142 kubelet[2250]: I0113 20:30:54.744050 2250 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:30:54.744142 kubelet[2250]: E0113 20:30:54.744101 2250 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:30:54.744754 kubelet[2250]: W0113 20:30:54.744672 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.744754 kubelet[2250]: E0113 20:30:54.744709 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused Jan 13 20:30:54.747585 kubelet[2250]: I0113 20:30:54.747558 2250 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:30:54.747585 kubelet[2250]: I0113 20:30:54.747586 2250 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:30:54.747701 kubelet[2250]: I0113 20:30:54.747605 2250 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:30:54.751275 kubelet[2250]: I0113 20:30:54.751246 2250 policy_none.go:49] "None policy: Start" Jan 13 20:30:54.751881 kubelet[2250]: I0113 20:30:54.751864 2250 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:30:54.751980 
kubelet[2250]: I0113 20:30:54.751968 2250 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:30:54.762409 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:30:54.772755 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:30:54.775298 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:30:54.787301 kubelet[2250]: I0113 20:30:54.787077 2250 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:30:54.787457 kubelet[2250]: I0113 20:30:54.787358 2250 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:30:54.789223 kubelet[2250]: E0113 20:30:54.789171 2250 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 20:30:54.827158 kubelet[2250]: I0113 20:30:54.827122 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:54.833273 kubelet[2250]: E0113 20:30:54.833240 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 13 20:30:54.844394 kubelet[2250]: I0113 20:30:54.844361 2250 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 20:30:54.845596 kubelet[2250]: I0113 20:30:54.845515 2250 topology_manager.go:215] "Topology Admit Handler" podUID="3affe810f12d08aea0764519639209dd" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 20:30:54.846620 kubelet[2250]: I0113 20:30:54.846477 2250 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 20:30:54.851826 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Jan 13 20:30:54.871440 systemd[1]: Created slice kubepods-burstable-pod3affe810f12d08aea0764519639209dd.slice - libcontainer container kubepods-burstable-pod3affe810f12d08aea0764519639209dd.slice. Jan 13 20:30:54.885863 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. 
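The three "Topology Admit Handler" lines are the kubelet admitting the static pods it just read from /etc/kubernetes/manifests; the kubepods-burstable-pod<UID>.slice units created alongside are their per-pod cgroups under the systemd driver. A skeleton of what one such manifest looks like (illustrative, not read from this host; only enough fields for a valid static pod, with the image version taken from the pulls above and the kubeconfig hostPath laid out the way kubeadm does):

    # /etc/kubernetes/manifests/kube-scheduler.yaml -- skeleton
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.29.12
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate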
Jan 13 20:30:54.926854 kubelet[2250]: E0113 20:30:54.926820 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Jan 13 20:30:55.027283 kubelet[2250]: I0113 20:30:55.027241 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:55.027283 kubelet[2250]: I0113 20:30:55.027287 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:55.027684 kubelet[2250]: I0113 20:30:55.027314 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:55.027684 kubelet[2250]: I0113 20:30:55.027336 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:55.027684 kubelet[2250]: I0113 20:30:55.027356 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 20:30:55.027684 kubelet[2250]: I0113 20:30:55.027375 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:55.027684 kubelet[2250]: I0113 20:30:55.027416 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost" Jan 13 20:30:55.027808 kubelet[2250]: I0113 20:30:55.027445 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:55.027808 kubelet[2250]: 
I0113 20:30:55.027467 2250 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 20:30:55.035295 kubelet[2250]: I0113 20:30:55.035260 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:55.035647 kubelet[2250]: E0113 20:30:55.035621 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 13 20:30:55.171209 kubelet[2250]: E0113 20:30:55.171159 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:55.171899 containerd[1444]: time="2025-01-13T20:30:55.171829904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:55.184267 kubelet[2250]: E0113 20:30:55.183982 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:55.184876 containerd[1444]: time="2025-01-13T20:30:55.184586756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3affe810f12d08aea0764519639209dd,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:55.187936 kubelet[2250]: E0113 20:30:55.187913 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:30:55.188480 containerd[1444]: time="2025-01-13T20:30:55.188436265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 20:30:55.328325 kubelet[2250]: E0113 20:30:55.328098 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Jan 13 20:30:55.437717 kubelet[2250]: I0113 20:30:55.437678 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 20:30:55.438014 kubelet[2250]: E0113 20:30:55.437988 2250 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Jan 13 20:30:55.629337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079812922.mount: Deactivated successfully. 
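The "Nameserver limits exceeded" errors are a warning, not a failure: the glibc resolver honors at most three nameserver entries, so the kubelet logs the line it actually applied (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. The host resolv.conf therefore looked something like this (the fourth entry is assumed; only the first three are visible in the log):

    # /etc/resolv.conf -- anything past the third nameserver is ignored
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4    # example of a dropped extra entry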
Jan 13 20:30:55.634717 containerd[1444]: time="2025-01-13T20:30:55.634661906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:30:55.635976 containerd[1444]: time="2025-01-13T20:30:55.635921258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:30:55.637084 containerd[1444]: time="2025-01-13T20:30:55.637048960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:30:55.637731 containerd[1444]: time="2025-01-13T20:30:55.637683582Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 13 20:30:55.638104 containerd[1444]: time="2025-01-13T20:30:55.638073007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:30:55.639760 containerd[1444]: time="2025-01-13T20:30:55.639731396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:30:55.640512 containerd[1444]: time="2025-01-13T20:30:55.640471195Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:30:55.645093 containerd[1444]: time="2025-01-13T20:30:55.645042679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:30:55.646021 containerd[1444]: time="2025-01-13T20:30:55.645994833Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 474.074932ms"
Jan 13 20:30:55.646825 containerd[1444]: time="2025-01-13T20:30:55.646752215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 458.237488ms"
Jan 13 20:30:55.649287 containerd[1444]: time="2025-01-13T20:30:55.649243163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 464.565529ms"
Jan 13 20:30:55.797390 containerd[1444]: time="2025-01-13T20:30:55.796433753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:55.797390 containerd[1444]: time="2025-01-13T20:30:55.796512535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:55.797390 containerd[1444]: time="2025-01-13T20:30:55.796537928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.797390 containerd[1444]: time="2025-01-13T20:30:55.796618032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.798012 containerd[1444]: time="2025-01-13T20:30:55.797417227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:55.798294 containerd[1444]: time="2025-01-13T20:30:55.798103837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:55.798294 containerd[1444]: time="2025-01-13T20:30:55.798123022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.798294 containerd[1444]: time="2025-01-13T20:30:55.798213139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.799548 containerd[1444]: time="2025-01-13T20:30:55.799475174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:30:55.799548 containerd[1444]: time="2025-01-13T20:30:55.799518751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:30:55.799666 containerd[1444]: time="2025-01-13T20:30:55.799530005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.799666 containerd[1444]: time="2025-01-13T20:30:55.799592046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:30:55.809091 kubelet[2250]: W0113 20:30:55.809048 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jan 13 20:30:55.809266 kubelet[2250]: E0113 20:30:55.809253 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jan 13 20:30:55.822606 systemd[1]: Started cri-containerd-4fc045ce36432d38502fab1c4b7224e1159d504f5b6a2832f5fd79f4f5e375b7.scope - libcontainer container 4fc045ce36432d38502fab1c4b7224e1159d504f5b6a2832f5fd79f4f5e375b7.
Jan 13 20:30:55.825871 systemd[1]: Started cri-containerd-88d74f828433fb94ada2c8fc404b3a405b852905c6e869a98c9d194384d58182.scope - libcontainer container 88d74f828433fb94ada2c8fc404b3a405b852905c6e869a98c9d194384d58182.
Jan 13 20:30:55.826963 systemd[1]: Started cri-containerd-d429119e565e4dedb8306291961d97b3a921ce78428eeea85616fa815dfa2774.scope - libcontainer container d429119e565e4dedb8306291961d97b3a921ce78428eeea85616fa815dfa2774.
Jan 13 20:30:55.855887 containerd[1444]: time="2025-01-13T20:30:55.855792278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fc045ce36432d38502fab1c4b7224e1159d504f5b6a2832f5fd79f4f5e375b7\""
Jan 13 20:30:55.856912 kubelet[2250]: E0113 20:30:55.856885 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:55.861454 containerd[1444]: time="2025-01-13T20:30:55.861365581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"88d74f828433fb94ada2c8fc404b3a405b852905c6e869a98c9d194384d58182\""
Jan 13 20:30:55.862365 containerd[1444]: time="2025-01-13T20:30:55.862327027Z" level=info msg="CreateContainer within sandbox \"4fc045ce36432d38502fab1c4b7224e1159d504f5b6a2832f5fd79f4f5e375b7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:30:55.862593 kubelet[2250]: E0113 20:30:55.862570 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:55.863712 containerd[1444]: time="2025-01-13T20:30:55.863680821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3affe810f12d08aea0764519639209dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d429119e565e4dedb8306291961d97b3a921ce78428eeea85616fa815dfa2774\""
Jan 13 20:30:55.864864 kubelet[2250]: E0113 20:30:55.864635 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:55.865353 containerd[1444]: time="2025-01-13T20:30:55.865320105Z" level=info msg="CreateContainer within sandbox \"88d74f828433fb94ada2c8fc404b3a405b852905c6e869a98c9d194384d58182\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:30:55.867157 containerd[1444]: time="2025-01-13T20:30:55.867121240Z" level=info msg="CreateContainer within sandbox \"d429119e565e4dedb8306291961d97b3a921ce78428eeea85616fa815dfa2774\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:30:55.880734 containerd[1444]: time="2025-01-13T20:30:55.880617450Z" level=info msg="CreateContainer within sandbox \"4fc045ce36432d38502fab1c4b7224e1159d504f5b6a2832f5fd79f4f5e375b7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"844231069ae9b914639c29cc984b3eec7fd440b56bbc213d484b4192c2fd346c\""
Jan 13 20:30:55.881820 containerd[1444]: time="2025-01-13T20:30:55.881787406Z" level=info msg="StartContainer for \"844231069ae9b914639c29cc984b3eec7fd440b56bbc213d484b4192c2fd346c\""
Jan 13 20:30:55.884838 containerd[1444]: time="2025-01-13T20:30:55.884789497Z" level=info msg="CreateContainer within sandbox \"d429119e565e4dedb8306291961d97b3a921ce78428eeea85616fa815dfa2774\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11d393e85219bd00bdde4975d4299a1f8e9decea3f0c28795d7b687171fb5605\""
Jan 13 20:30:55.885242 containerd[1444]: time="2025-01-13T20:30:55.885181485Z" level=info msg="CreateContainer within sandbox \"88d74f828433fb94ada2c8fc404b3a405b852905c6e869a98c9d194384d58182\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cf340afc81efaf9f99e16731cc4617a54d645c5ffac181f2053a76dfe7ea0e30\""
Jan 13 20:30:55.886635 containerd[1444]: time="2025-01-13T20:30:55.885551444Z" level=info msg="StartContainer for \"cf340afc81efaf9f99e16731cc4617a54d645c5ffac181f2053a76dfe7ea0e30\""
Jan 13 20:30:55.886635 containerd[1444]: time="2025-01-13T20:30:55.885552205Z" level=info msg="StartContainer for \"11d393e85219bd00bdde4975d4299a1f8e9decea3f0c28795d7b687171fb5605\""
Jan 13 20:30:55.907605 systemd[1]: Started cri-containerd-844231069ae9b914639c29cc984b3eec7fd440b56bbc213d484b4192c2fd346c.scope - libcontainer container 844231069ae9b914639c29cc984b3eec7fd440b56bbc213d484b4192c2fd346c.
Jan 13 20:30:55.911416 systemd[1]: Started cri-containerd-11d393e85219bd00bdde4975d4299a1f8e9decea3f0c28795d7b687171fb5605.scope - libcontainer container 11d393e85219bd00bdde4975d4299a1f8e9decea3f0c28795d7b687171fb5605.
Jan 13 20:30:55.912518 systemd[1]: Started cri-containerd-cf340afc81efaf9f99e16731cc4617a54d645c5ffac181f2053a76dfe7ea0e30.scope - libcontainer container cf340afc81efaf9f99e16731cc4617a54d645c5ffac181f2053a76dfe7ea0e30.
Jan 13 20:30:55.954581 containerd[1444]: time="2025-01-13T20:30:55.954403232Z" level=info msg="StartContainer for \"11d393e85219bd00bdde4975d4299a1f8e9decea3f0c28795d7b687171fb5605\" returns successfully"
Jan 13 20:30:55.962047 kubelet[2250]: W0113 20:30:55.961989 2250 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jan 13 20:30:55.962047 kubelet[2250]: E0113 20:30:55.962051 2250 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.144:6443: connect: connection refused
Jan 13 20:30:55.984617 containerd[1444]: time="2025-01-13T20:30:55.984560474Z" level=info msg="StartContainer for \"cf340afc81efaf9f99e16731cc4617a54d645c5ffac181f2053a76dfe7ea0e30\" returns successfully"
Jan 13 20:30:55.984876 containerd[1444]: time="2025-01-13T20:30:55.984571768Z" level=info msg="StartContainer for \"844231069ae9b914639c29cc984b3eec7fd440b56bbc213d484b4192c2fd346c\" returns successfully"
Jan 13 20:30:56.129212 kubelet[2250]: E0113 20:30:56.129158 2250 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="1.6s"
Jan 13 20:30:56.241929 kubelet[2250]: I0113 20:30:56.241476 2250 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:30:56.753305 kubelet[2250]: E0113 20:30:56.753269 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:56.754329 kubelet[2250]: E0113 20:30:56.754249 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:56.755430 kubelet[2250]: E0113 20:30:56.755411 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:57.759636 kubelet[2250]: E0113 20:30:57.759554 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:58.194770 kubelet[2250]: E0113 20:30:58.194598 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:30:58.285298 kubelet[2250]: E0113 20:30:58.285258 2250 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 20:30:58.365162 kubelet[2250]: I0113 20:30:58.365117 2250 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 20:30:58.721334 kubelet[2250]: I0113 20:30:58.721292 2250 apiserver.go:52] "Watching apiserver"
Jan 13 20:30:58.725950 kubelet[2250]: I0113 20:30:58.725889 2250 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:31:00.079215 kubelet[2250]: E0113 20:31:00.079152 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:00.763186 kubelet[2250]: E0113 20:31:00.763153 2250 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:01.021168 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-7.scope)...
Jan 13 20:31:01.021186 systemd[1]: Reloading...
Jan 13 20:31:01.091452 zram_generator::config[2574]: No configuration found.
Jan 13 20:31:01.170438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:31:01.234544 systemd[1]: Reloading finished in 213 ms.
Jan 13 20:31:01.268400 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:01.268752 kubelet[2250]: I0113 20:31:01.268575 2250 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:31:01.277737 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:31:01.277926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:01.277971 systemd[1]: kubelet.service: Consumed 1.247s CPU time, 113.0M memory peak, 0B memory swap peak.
Jan 13 20:31:01.291768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:01.380898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:01.384912 (kubelet)[2613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:31:01.426610 kubelet[2613]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:31:01.426610 kubelet[2613]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:31:01.426610 kubelet[2613]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:31:01.426940 kubelet[2613]: I0113 20:31:01.426652 2613 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:31:01.430948 kubelet[2613]: I0113 20:31:01.430633 2613 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:31:01.430948 kubelet[2613]: I0113 20:31:01.430659 2613 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:31:01.430948 kubelet[2613]: I0113 20:31:01.430810 2613 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:31:01.436464 kubelet[2613]: I0113 20:31:01.434852 2613 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:31:01.436880 kubelet[2613]: I0113 20:31:01.436859 2613 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:31:01.442807 kubelet[2613]: I0113 20:31:01.442785 2613 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:31:01.443028 kubelet[2613]: I0113 20:31:01.443015 2613 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:31:01.443191 kubelet[2613]: I0113 20:31:01.443177 2613 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:31:01.443307 kubelet[2613]: I0113 20:31:01.443195 2613 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:31:01.443307 kubelet[2613]: I0113 20:31:01.443204 2613 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:31:01.443307 kubelet[2613]: I0113 20:31:01.443234 2613 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:31:01.443405 kubelet[2613]: I0113 20:31:01.443323 2613 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 20:31:01.443405 kubelet[2613]: I0113 20:31:01.443337 2613 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:31:01.443405 kubelet[2613]: I0113 20:31:01.443356 2613 kubelet.go:312] "Adding apiserver pod source"
Jan 13 20:31:01.443405 kubelet[2613]: I0113 20:31:01.443369 2613 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:31:01.446935 kubelet[2613]: I0113 20:31:01.446749 2613 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:31:01.447818 kubelet[2613]: I0113 20:31:01.447798 2613 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:31:01.448489 kubelet[2613]: I0113 20:31:01.448460 2613 server.go:1256] "Started kubelet"
Jan 13 20:31:01.448984 kubelet[2613]: I0113 20:31:01.448956 2613 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:31:01.449291 kubelet[2613]: I0113 20:31:01.449275 2613 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:31:01.449996 kubelet[2613]: I0113 20:31:01.449969 2613 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:31:01.450633 kubelet[2613]: I0113 20:31:01.450617 2613 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:31:01.451701 kubelet[2613]: I0113 20:31:01.451672 2613 server.go:461] "Adding debug handlers to kubelet server"
Jan 13 20:31:01.454097 kubelet[2613]: I0113 20:31:01.454074 2613 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 13 20:31:01.454323 kubelet[2613]: I0113 20:31:01.454306 2613 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jan 13 20:31:01.454851 kubelet[2613]: I0113 20:31:01.454837 2613 reconciler_new.go:29] "Reconciler: start to sync state"
Jan 13 20:31:01.469802 kubelet[2613]: E0113 20:31:01.466928 2613 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:31:01.469802 kubelet[2613]: I0113 20:31:01.467118 2613 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:31:01.469802 kubelet[2613]: I0113 20:31:01.467322 2613 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:31:01.469802 kubelet[2613]: I0113 20:31:01.469151 2613 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:31:01.480452 kubelet[2613]: I0113 20:31:01.480420 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:31:01.481951 kubelet[2613]: I0113 20:31:01.481922 2613 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:31:01.482039 kubelet[2613]: I0113 20:31:01.482030 2613 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:31:01.482121 kubelet[2613]: I0113 20:31:01.482111 2613 kubelet.go:2329] "Starting kubelet main sync loop"
Jan 13 20:31:01.482226 kubelet[2613]: E0113 20:31:01.482209 2613 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:31:01.506954 kubelet[2613]: I0113 20:31:01.506927 2613 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:31:01.507198 kubelet[2613]: I0113 20:31:01.507185 2613 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:31:01.507267 kubelet[2613]: I0113 20:31:01.507258 2613 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:31:01.507503 kubelet[2613]: I0113 20:31:01.507477 2613 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:31:01.507603 kubelet[2613]: I0113 20:31:01.507591 2613 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:31:01.507662 kubelet[2613]: I0113 20:31:01.507653 2613 policy_none.go:49] "None policy: Start"
Jan 13 20:31:01.508460 kubelet[2613]: I0113 20:31:01.508430 2613 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:31:01.508545 kubelet[2613]: I0113 20:31:01.508475 2613 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:31:01.508655 kubelet[2613]: I0113 20:31:01.508636 2613 state_mem.go:75] "Updated machine memory state"
Jan 13 20:31:01.512514 kubelet[2613]: I0113 20:31:01.512493 2613 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:31:01.513403 kubelet[2613]: I0113 20:31:01.513215 2613 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:31:01.558417 kubelet[2613]: I0113 20:31:01.558203 2613 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 13 20:31:01.569594 kubelet[2613]: I0113 20:31:01.569505 2613 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 13 20:31:01.571000 kubelet[2613]: I0113 20:31:01.569625 2613 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 13 20:31:01.582499 kubelet[2613]: I0113 20:31:01.582465 2613 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 13 20:31:01.583293 kubelet[2613]: I0113 20:31:01.582719 2613 topology_manager.go:215] "Topology Admit Handler" podUID="3affe810f12d08aea0764519639209dd" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 13 20:31:01.583293 kubelet[2613]: I0113 20:31:01.582793 2613 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 13 20:31:01.593589 kubelet[2613]: E0113 20:31:01.593550 2613 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 20:31:01.655982 kubelet[2613]: I0113 20:31:01.655942 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:31:01.655982 kubelet[2613]: I0113 20:31:01.655991 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:31:01.656131 kubelet[2613]: I0113 20:31:01.656016 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:31:01.656131 kubelet[2613]: I0113 20:31:01.656036 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:31:01.656131 kubelet[2613]: I0113 20:31:01.656058 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:31:01.656131 kubelet[2613]: I0113 20:31:01.656111 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:31:01.656229 kubelet[2613]: I0113 20:31:01.656148 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:31:01.656229 kubelet[2613]: I0113 20:31:01.656180 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3affe810f12d08aea0764519639209dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3affe810f12d08aea0764519639209dd\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:31:01.656229 kubelet[2613]: I0113 20:31:01.656202 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:31:01.892835 kubelet[2613]: E0113 20:31:01.892708 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:01.894435 kubelet[2613]: E0113 20:31:01.894402 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:01.895424 kubelet[2613]: E0113 20:31:01.894693 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:02.444523 kubelet[2613]: I0113 20:31:02.444470 2613 apiserver.go:52] "Watching apiserver"
Jan 13 20:31:02.455452 kubelet[2613]: I0113 20:31:02.455394 2613 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Jan 13 20:31:02.496183 kubelet[2613]: E0113 20:31:02.496137 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:02.496440 kubelet[2613]: E0113 20:31:02.496426 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:02.510393 kubelet[2613]: E0113 20:31:02.510345 2613 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 13 20:31:02.510856 kubelet[2613]: E0113 20:31:02.510837 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:02.516058 kubelet[2613]: I0113 20:31:02.516021 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.515985087 podStartE2EDuration="1.515985087s" podCreationTimestamp="2025-01-13 20:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:02.515937527 +0000 UTC m=+1.127390513" watchObservedRunningTime="2025-01-13 20:31:02.515985087 +0000 UTC m=+1.127438033"
Jan 13 20:31:02.531142 kubelet[2613]: I0113 20:31:02.530967 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5309309410000003 podStartE2EDuration="2.530930941s" podCreationTimestamp="2025-01-13 20:31:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:02.52433802 +0000 UTC m=+1.135791006" watchObservedRunningTime="2025-01-13 20:31:02.530930941 +0000 UTC m=+1.142383927"
Jan 13 20:31:02.531142 kubelet[2613]: I0113 20:31:02.531109 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.531092154 podStartE2EDuration="1.531092154s" podCreationTimestamp="2025-01-13 20:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:02.530899275 +0000 UTC m=+1.142352261" watchObservedRunningTime="2025-01-13 20:31:02.531092154 +0000 UTC m=+1.142545140"
Jan 13 20:31:03.498925 kubelet[2613]: E0113 20:31:03.497059 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:03.707214 kubelet[2613]: E0113 20:31:03.707152 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:05.158397 kubelet[2613]: E0113 20:31:05.158336 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:05.501598 kubelet[2613]: E0113 20:31:05.500967 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:05.565738 sudo[1622]: pam_unix(sudo:session): session closed for user root
Jan 13 20:31:05.567303 sshd[1621]: Connection closed by 10.0.0.1 port 33026
Jan 13 20:31:05.567779 sshd-session[1619]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:05.571349 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:33026.service: Deactivated successfully.
Jan 13 20:31:05.573339 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:31:05.573785 systemd[1]: session-7.scope: Consumed 6.617s CPU time, 188.9M memory peak, 0B memory swap peak.
Jan 13 20:31:05.574972 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:31:05.576157 systemd-logind[1423]: Removed session 7.
Jan 13 20:31:09.991474 kubelet[2613]: E0113 20:31:09.991375 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:10.510102 kubelet[2613]: E0113 20:31:10.510075 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:13.714830 kubelet[2613]: E0113 20:31:13.714544 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:14.099508 update_engine[1430]: I20250113 20:31:14.099421 1430 update_attempter.cc:509] Updating boot flags...
Jan 13 20:31:14.132523 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2708)
Jan 13 20:31:14.186567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2712)
Jan 13 20:31:14.956527 kubelet[2613]: I0113 20:31:14.956492 2613 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:31:14.978542 containerd[1444]: time="2025-01-13T20:31:14.978483737Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:31:14.979408 kubelet[2613]: I0113 20:31:14.979266 2613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:31:15.020564 kubelet[2613]: I0113 20:31:15.020525 2613 topology_manager.go:215] "Topology Admit Handler" podUID="f1541724-1397-4a02-a00e-2ed5137e0801" podNamespace="kube-system" podName="kube-proxy-zdpbp"
Jan 13 20:31:15.030861 systemd[1]: Created slice kubepods-besteffort-podf1541724_1397_4a02_a00e_2ed5137e0801.slice - libcontainer container kubepods-besteffort-podf1541724_1397_4a02_a00e_2ed5137e0801.slice.
Jan 13 20:31:15.041707 kubelet[2613]: I0113 20:31:15.041674 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1541724-1397-4a02-a00e-2ed5137e0801-kube-proxy\") pod \"kube-proxy-zdpbp\" (UID: \"f1541724-1397-4a02-a00e-2ed5137e0801\") " pod="kube-system/kube-proxy-zdpbp"
Jan 13 20:31:15.041821 kubelet[2613]: I0113 20:31:15.041720 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1541724-1397-4a02-a00e-2ed5137e0801-xtables-lock\") pod \"kube-proxy-zdpbp\" (UID: \"f1541724-1397-4a02-a00e-2ed5137e0801\") " pod="kube-system/kube-proxy-zdpbp"
Jan 13 20:31:15.041821 kubelet[2613]: I0113 20:31:15.041748 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8q2\" (UniqueName: \"kubernetes.io/projected/f1541724-1397-4a02-a00e-2ed5137e0801-kube-api-access-dv8q2\") pod \"kube-proxy-zdpbp\" (UID: \"f1541724-1397-4a02-a00e-2ed5137e0801\") " pod="kube-system/kube-proxy-zdpbp"
Jan 13 20:31:15.041821 kubelet[2613]: I0113 20:31:15.041767 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1541724-1397-4a02-a00e-2ed5137e0801-lib-modules\") pod \"kube-proxy-zdpbp\" (UID: \"f1541724-1397-4a02-a00e-2ed5137e0801\") " pod="kube-system/kube-proxy-zdpbp"
Jan 13 20:31:15.075015 kubelet[2613]: I0113 20:31:15.074969 2613 topology_manager.go:215] "Topology Admit Handler" podUID="bec397c0-faa5-4c79-aeec-8babe27b226a" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-cpzdh"
Jan 13 20:31:15.085316 systemd[1]: Created slice kubepods-besteffort-podbec397c0_faa5_4c79_aeec_8babe27b226a.slice - libcontainer container kubepods-besteffort-podbec397c0_faa5_4c79_aeec_8babe27b226a.slice.
Jan 13 20:31:15.142620 kubelet[2613]: I0113 20:31:15.142571 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bec397c0-faa5-4c79-aeec-8babe27b226a-var-lib-calico\") pod \"tigera-operator-c7ccbd65-cpzdh\" (UID: \"bec397c0-faa5-4c79-aeec-8babe27b226a\") " pod="tigera-operator/tigera-operator-c7ccbd65-cpzdh"
Jan 13 20:31:15.142620 kubelet[2613]: I0113 20:31:15.142621 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl9lr\" (UniqueName: \"kubernetes.io/projected/bec397c0-faa5-4c79-aeec-8babe27b226a-kube-api-access-vl9lr\") pod \"tigera-operator-c7ccbd65-cpzdh\" (UID: \"bec397c0-faa5-4c79-aeec-8babe27b226a\") " pod="tigera-operator/tigera-operator-c7ccbd65-cpzdh"
Jan 13 20:31:15.338444 kubelet[2613]: E0113 20:31:15.338402 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:15.339602 containerd[1444]: time="2025-01-13T20:31:15.339558607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdpbp,Uid:f1541724-1397-4a02-a00e-2ed5137e0801,Namespace:kube-system,Attempt:0,}"
Jan 13 20:31:15.366481 containerd[1444]: time="2025-01-13T20:31:15.366096676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:31:15.366481 containerd[1444]: time="2025-01-13T20:31:15.366146574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:31:15.366481 containerd[1444]: time="2025-01-13T20:31:15.366158538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:15.366481 containerd[1444]: time="2025-01-13T20:31:15.366232325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:15.387865 containerd[1444]: time="2025-01-13T20:31:15.387823588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-cpzdh,Uid:bec397c0-faa5-4c79-aeec-8babe27b226a,Namespace:tigera-operator,Attempt:0,}"
Jan 13 20:31:15.392648 systemd[1]: Started cri-containerd-875481f5cf90be50edd42374fbeb57ff0c5741c43d991400181153413d833ac7.scope - libcontainer container 875481f5cf90be50edd42374fbeb57ff0c5741c43d991400181153413d833ac7.
Jan 13 20:31:15.420351 containerd[1444]: time="2025-01-13T20:31:15.420255160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdpbp,Uid:f1541724-1397-4a02-a00e-2ed5137e0801,Namespace:kube-system,Attempt:0,} returns sandbox id \"875481f5cf90be50edd42374fbeb57ff0c5741c43d991400181153413d833ac7\""
Jan 13 20:31:15.424369 kubelet[2613]: E0113 20:31:15.424341 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:15.429253 containerd[1444]: time="2025-01-13T20:31:15.429211155Z" level=info msg="CreateContainer within sandbox \"875481f5cf90be50edd42374fbeb57ff0c5741c43d991400181153413d833ac7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:31:15.436318 containerd[1444]: time="2025-01-13T20:31:15.436066681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:31:15.436318 containerd[1444]: time="2025-01-13T20:31:15.436131664Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:31:15.436318 containerd[1444]: time="2025-01-13T20:31:15.436146029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:15.436318 containerd[1444]: time="2025-01-13T20:31:15.436233741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:15.454581 systemd[1]: Started cri-containerd-003290ced274ada9d89e68e735e1effffd786976e478368a57c515827410acac.scope - libcontainer container 003290ced274ada9d89e68e735e1effffd786976e478368a57c515827410acac.
Jan 13 20:31:15.455789 containerd[1444]: time="2025-01-13T20:31:15.455723455Z" level=info msg="CreateContainer within sandbox \"875481f5cf90be50edd42374fbeb57ff0c5741c43d991400181153413d833ac7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a25af7f5e20d5a14f8e5ee2416073d6276a5e32b0f247c4262a976ea36220542\""
Jan 13 20:31:15.456832 containerd[1444]: time="2025-01-13T20:31:15.456511736Z" level=info msg="StartContainer for \"a25af7f5e20d5a14f8e5ee2416073d6276a5e32b0f247c4262a976ea36220542\""
Jan 13 20:31:15.487589 systemd[1]: Started cri-containerd-a25af7f5e20d5a14f8e5ee2416073d6276a5e32b0f247c4262a976ea36220542.scope - libcontainer container a25af7f5e20d5a14f8e5ee2416073d6276a5e32b0f247c4262a976ea36220542.
Jan 13 20:31:15.489140 containerd[1444]: time="2025-01-13T20:31:15.489092320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-cpzdh,Uid:bec397c0-faa5-4c79-aeec-8babe27b226a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"003290ced274ada9d89e68e735e1effffd786976e478368a57c515827410acac\""
Jan 13 20:31:15.495252 containerd[1444]: time="2025-01-13T20:31:15.495199379Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 20:31:15.519667 containerd[1444]: time="2025-01-13T20:31:15.519611689Z" level=info msg="StartContainer for \"a25af7f5e20d5a14f8e5ee2416073d6276a5e32b0f247c4262a976ea36220542\" returns successfully"
Jan 13 20:31:15.525131 kubelet[2613]: E0113 20:31:15.525060 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:16.419877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2552179638.mount: Deactivated successfully.
Jan 13 20:31:16.757470 containerd[1444]: time="2025-01-13T20:31:16.757288856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:16.758146 containerd[1444]: time="2025-01-13T20:31:16.758060834Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125956"
Jan 13 20:31:16.758688 containerd[1444]: time="2025-01-13T20:31:16.758656393Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:16.762044 containerd[1444]: time="2025-01-13T20:31:16.762001872Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:31:16.762725 containerd[1444]: time="2025-01-13T20:31:16.762694904Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 1.267449189s"
Jan 13 20:31:16.762776 containerd[1444]: time="2025-01-13T20:31:16.762730436Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 13 20:31:16.771729 containerd[1444]: time="2025-01-13T20:31:16.771688713Z" level=info msg="CreateContainer within sandbox \"003290ced274ada9d89e68e735e1effffd786976e478368a57c515827410acac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 20:31:16.784153 containerd[1444]: time="2025-01-13T20:31:16.784101225Z" level=info msg="CreateContainer within sandbox \"003290ced274ada9d89e68e735e1effffd786976e478368a57c515827410acac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"13ea5232780e4afe0f6fb242da60ec61ca0ae99eeef7bc08f234cb9f225dbd3d\""
Jan 13 20:31:16.784694 containerd[1444]: time="2025-01-13T20:31:16.784669175Z" level=info msg="StartContainer for \"13ea5232780e4afe0f6fb242da60ec61ca0ae99eeef7bc08f234cb9f225dbd3d\""
Jan 13 20:31:16.813584 systemd[1]: Started cri-containerd-13ea5232780e4afe0f6fb242da60ec61ca0ae99eeef7bc08f234cb9f225dbd3d.scope - libcontainer container 13ea5232780e4afe0f6fb242da60ec61ca0ae99eeef7bc08f234cb9f225dbd3d.
Jan 13 20:31:16.838148 containerd[1444]: time="2025-01-13T20:31:16.838102168Z" level=info msg="StartContainer for \"13ea5232780e4afe0f6fb242da60ec61ca0ae99eeef7bc08f234cb9f225dbd3d\" returns successfully"
Jan 13 20:31:17.554234 kubelet[2613]: I0113 20:31:17.554177 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zdpbp" podStartSLOduration=2.55414036 podStartE2EDuration="2.55414036s" podCreationTimestamp="2025-01-13 20:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:15.535464386 +0000 UTC m=+14.146917372" watchObservedRunningTime="2025-01-13 20:31:17.55414036 +0000 UTC m=+16.165593306"
Jan 13 20:31:20.746409 kubelet[2613]: I0113 20:31:20.745884 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-cpzdh" podStartSLOduration=4.47239624 podStartE2EDuration="5.745834734s" podCreationTimestamp="2025-01-13 20:31:15 +0000 UTC" firstStartedPulling="2025-01-13 20:31:15.49065996 +0000 UTC m=+14.102112906" lastFinishedPulling="2025-01-13 20:31:16.764098454 +0000 UTC m=+15.375551400" observedRunningTime="2025-01-13 20:31:17.554438014 +0000 UTC m=+16.165891000" watchObservedRunningTime="2025-01-13 20:31:20.745834734 +0000 UTC m=+19.357287720"
Jan 13 20:31:20.746914 kubelet[2613]: I0113 20:31:20.746596 2613 topology_manager.go:215] "Topology Admit Handler" podUID="ba72654f-640d-49a4-a557-eadc9d53d710" podNamespace="calico-system" podName="calico-typha-85d5fd46b5-9pcmj"
Jan 13 20:31:20.755423 systemd[1]: Created slice kubepods-besteffort-podba72654f_640d_49a4_a557_eadc9d53d710.slice - libcontainer container kubepods-besteffort-podba72654f_640d_49a4_a557_eadc9d53d710.slice.
Jan 13 20:31:20.881682 kubelet[2613]: I0113 20:31:20.881642 2613 topology_manager.go:215] "Topology Admit Handler" podUID="172de8a6-4e90-44e0-99bc-a236f33d859d" podNamespace="calico-system" podName="calico-node-kgvnf"
Jan 13 20:31:20.888982 kubelet[2613]: I0113 20:31:20.888847 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcqcw\" (UniqueName: \"kubernetes.io/projected/ba72654f-640d-49a4-a557-eadc9d53d710-kube-api-access-zcqcw\") pod \"calico-typha-85d5fd46b5-9pcmj\" (UID: \"ba72654f-640d-49a4-a557-eadc9d53d710\") " pod="calico-system/calico-typha-85d5fd46b5-9pcmj"
Jan 13 20:31:20.888982 kubelet[2613]: I0113 20:31:20.888891 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba72654f-640d-49a4-a557-eadc9d53d710-tigera-ca-bundle\") pod \"calico-typha-85d5fd46b5-9pcmj\" (UID: \"ba72654f-640d-49a4-a557-eadc9d53d710\") " pod="calico-system/calico-typha-85d5fd46b5-9pcmj"
Jan 13 20:31:20.888982 kubelet[2613]: I0113 20:31:20.888913 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ba72654f-640d-49a4-a557-eadc9d53d710-typha-certs\") pod \"calico-typha-85d5fd46b5-9pcmj\" (UID: \"ba72654f-640d-49a4-a557-eadc9d53d710\") " pod="calico-system/calico-typha-85d5fd46b5-9pcmj"
Jan 13 20:31:20.889555 systemd[1]: Created slice kubepods-besteffort-pod172de8a6_4e90_44e0_99bc_a236f33d859d.slice - libcontainer container kubepods-besteffort-pod172de8a6_4e90_44e0_99bc_a236f33d859d.slice.
Jan 13 20:31:20.989640 kubelet[2613]: I0113 20:31:20.989601 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-var-run-calico\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989640 kubelet[2613]: I0113 20:31:20.989646 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-var-lib-calico\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989814 kubelet[2613]: I0113 20:31:20.989668 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/172de8a6-4e90-44e0-99bc-a236f33d859d-tigera-ca-bundle\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989814 kubelet[2613]: I0113 20:31:20.989690 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-cni-log-dir\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989814 kubelet[2613]: I0113 20:31:20.989710 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llsqq\" (UniqueName: \"kubernetes.io/projected/172de8a6-4e90-44e0-99bc-a236f33d859d-kube-api-access-llsqq\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989814 kubelet[2613]: I0113 20:31:20.989730 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-xtables-lock\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989904 kubelet[2613]: I0113 20:31:20.989812 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/172de8a6-4e90-44e0-99bc-a236f33d859d-node-certs\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989904 kubelet[2613]: I0113 20:31:20.989854 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-cni-bin-dir\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989904 kubelet[2613]: I0113 20:31:20.989895 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-flexvol-driver-host\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989973 kubelet[2613]: I0113 20:31:20.989930 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-policysync\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.989973 kubelet[2613]: I0113 20:31:20.989957 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-cni-net-dir\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:20.990595 kubelet[2613]: I0113 20:31:20.990027 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/172de8a6-4e90-44e0-99bc-a236f33d859d-lib-modules\") pod \"calico-node-kgvnf\" (UID: \"172de8a6-4e90-44e0-99bc-a236f33d859d\") " pod="calico-system/calico-node-kgvnf"
Jan 13 20:31:21.077412 kubelet[2613]: E0113 20:31:21.076867 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:31:21.081031 kubelet[2613]: I0113 20:31:21.080508 2613 topology_manager.go:215] "Topology Admit Handler" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" podNamespace="calico-system" podName="csi-node-driver-xmsjq"
Jan 13 20:31:21.081031 kubelet[2613]: E0113 20:31:21.080737 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453"
Jan 13 20:31:21.081146 containerd[1444]: time="2025-01-13T20:31:21.080605605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d5fd46b5-9pcmj,Uid:ba72654f-640d-49a4-a557-eadc9d53d710,Namespace:calico-system,Attempt:0,}"
Jan 13 20:31:21.145596 containerd[1444]: time="2025-01-13T20:31:21.145353171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:31:21.145596 containerd[1444]: time="2025-01-13T20:31:21.145445513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:31:21.145596 containerd[1444]: time="2025-01-13T20:31:21.145466919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:21.145911 containerd[1444]: time="2025-01-13T20:31:21.145553220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:31:21.164578 systemd[1]: Started cri-containerd-936d9c3340a6b91522ab0c24633c4e58fedd10381e0b25477b914fbfbd77bbfa.scope - libcontainer container 936d9c3340a6b91522ab0c24633c4e58fedd10381e0b25477b914fbfbd77bbfa.
Jan 13 20:31:21.191792 kubelet[2613]: E0113 20:31:21.191652 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.191792 kubelet[2613]: W0113 20:31:21.191679 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.191792 kubelet[2613]: E0113 20:31:21.191705 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.191792 kubelet[2613]: I0113 20:31:21.191736 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrxtz\" (UniqueName: \"kubernetes.io/projected/d62b149c-90ef-4582-bf5b-b3dad659f453-kube-api-access-hrxtz\") pod \"csi-node-driver-xmsjq\" (UID: \"d62b149c-90ef-4582-bf5b-b3dad659f453\") " pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:21.192525 kubelet[2613]: E0113 20:31:21.192250 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:21.192832 kubelet[2613]: E0113 20:31:21.192814 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.192979 kubelet[2613]: W0113 20:31:21.192893 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.192979 kubelet[2613]: E0113 20:31:21.192921 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.192979 kubelet[2613]: I0113 20:31:21.192944 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d62b149c-90ef-4582-bf5b-b3dad659f453-varrun\") pod \"csi-node-driver-xmsjq\" (UID: \"d62b149c-90ef-4582-bf5b-b3dad659f453\") " pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:21.193481 kubelet[2613]: E0113 20:31:21.193351 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.193481 kubelet[2613]: W0113 20:31:21.193363 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.193581 kubelet[2613]: E0113 20:31:21.193478 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:31:21.193581 kubelet[2613]: I0113 20:31:21.193518 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d62b149c-90ef-4582-bf5b-b3dad659f453-registration-dir\") pod \"csi-node-driver-xmsjq\" (UID: \"d62b149c-90ef-4582-bf5b-b3dad659f453\") " pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:21.193864 containerd[1444]: time="2025-01-13T20:31:21.193686121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgvnf,Uid:172de8a6-4e90-44e0-99bc-a236f33d859d,Namespace:calico-system,Attempt:0,}" Jan 13 20:31:21.193939 kubelet[2613]: E0113 20:31:21.193767 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.193939 kubelet[2613]: W0113 20:31:21.193776 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.193939 kubelet[2613]: E0113 20:31:21.193822 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.194153 kubelet[2613]: E0113 20:31:21.194138 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.194346 kubelet[2613]: W0113 20:31:21.194300 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.194346 kubelet[2613]: E0113 20:31:21.194331 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.194708 kubelet[2613]: E0113 20:31:21.194689 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.194708 kubelet[2613]: W0113 20:31:21.194704 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.194783 kubelet[2613]: E0113 20:31:21.194725 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:31:21.194783 kubelet[2613]: I0113 20:31:21.194747 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d62b149c-90ef-4582-bf5b-b3dad659f453-kubelet-dir\") pod \"csi-node-driver-xmsjq\" (UID: \"d62b149c-90ef-4582-bf5b-b3dad659f453\") " pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:21.195501 kubelet[2613]: E0113 20:31:21.195476 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.195501 kubelet[2613]: W0113 20:31:21.195496 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.195593 kubelet[2613]: E0113 20:31:21.195512 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.196099 kubelet[2613]: E0113 20:31:21.195765 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.196099 kubelet[2613]: W0113 20:31:21.195778 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.196099 kubelet[2613]: E0113 20:31:21.195790 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.196099 kubelet[2613]: E0113 20:31:21.195992 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.196099 kubelet[2613]: W0113 20:31:21.196001 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.196099 kubelet[2613]: E0113 20:31:21.196016 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:21.196099 kubelet[2613]: I0113 20:31:21.196035 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d62b149c-90ef-4582-bf5b-b3dad659f453-socket-dir\") pod \"csi-node-driver-xmsjq\" (UID: \"d62b149c-90ef-4582-bf5b-b3dad659f453\") " pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:21.196450 kubelet[2613]: E0113 20:31:21.196372 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.196450 kubelet[2613]: W0113 20:31:21.196398 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.196450 kubelet[2613]: E0113 20:31:21.196413 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:730) repeats verbatim with timestamps 20:31:21.196570 through 20:31:21.197565] Jan 13 20:31:21.197756 kubelet[2613]: E0113 20:31:21.197577 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:31:21.217375 containerd[1444]: time="2025-01-13T20:31:21.217316766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85d5fd46b5-9pcmj,Uid:ba72654f-640d-49a4-a557-eadc9d53d710,Namespace:calico-system,Attempt:0,} returns sandbox id \"936d9c3340a6b91522ab0c24633c4e58fedd10381e0b25477b914fbfbd77bbfa\"" Jan 13 20:31:21.218702 kubelet[2613]: E0113 20:31:21.218674 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:21.222338 containerd[1444]: time="2025-01-13T20:31:21.221912879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:31:21.223332 containerd[1444]: time="2025-01-13T20:31:21.223235800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:21.223332 containerd[1444]: time="2025-01-13T20:31:21.223317980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:21.223474 containerd[1444]: time="2025-01-13T20:31:21.223330543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:21.223474 containerd[1444]: time="2025-01-13T20:31:21.223444410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:21.248554 systemd[1]: Started cri-containerd-60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d.scope - libcontainer container 60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d. Jan 13 20:31:21.269954 containerd[1444]: time="2025-01-13T20:31:21.269918989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgvnf,Uid:172de8a6-4e90-44e0-99bc-a236f33d859d,Namespace:calico-system,Attempt:0,} returns sandbox id \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\"" Jan 13 20:31:21.271167 kubelet[2613]: E0113 20:31:21.270825 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:21.297264 kubelet[2613]: E0113 20:31:21.297226 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:21.297264 kubelet[2613]: W0113 20:31:21.297249 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:21.297264 kubelet[2613]: E0113 20:31:21.297270 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the FlexVolume init probe triplet repeats verbatim with timestamps 20:31:21.297545 through 20:31:21.312219] Jan 13 20:31:21.312241 kubelet[2613]: E0113 20:31:21.312234 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:31:22.084928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1605953403.mount: Deactivated successfully. Jan 13 20:31:22.354263 containerd[1444]: time="2025-01-13T20:31:22.354156470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:22.354827 containerd[1444]: time="2025-01-13T20:31:22.354782252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 13 20:31:22.355420 containerd[1444]: time="2025-01-13T20:31:22.355355862Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:22.358018 containerd[1444]: time="2025-01-13T20:31:22.357988940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:22.358643 containerd[1444]: time="2025-01-13T20:31:22.358615043Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.1362713s" Jan 13 20:31:22.358699 containerd[1444]: time="2025-01-13T20:31:22.358647170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 13 20:31:22.361321 containerd[1444]: time="2025-01-13T20:31:22.361293291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:31:22.369150 containerd[1444]: time="2025-01-13T20:31:22.369115347Z" level=info msg="CreateContainer within sandbox \"936d9c3340a6b91522ab0c24633c4e58fedd10381e0b25477b914fbfbd77bbfa\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 20:31:22.378988 containerd[1444]: time="2025-01-13T20:31:22.378938739Z" level=info msg="CreateContainer within sandbox \"936d9c3340a6b91522ab0c24633c4e58fedd10381e0b25477b914fbfbd77bbfa\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a91b5836fb43b400b27bba3213b854da5f99bc4ffa6c3dd312ebf7115e006b51\"" Jan 13 20:31:22.379429 containerd[1444]: time="2025-01-13T20:31:22.379403524Z" level=info msg="StartContainer for \"a91b5836fb43b400b27bba3213b854da5f99bc4ffa6c3dd312ebf7115e006b51\"" Jan 13 20:31:22.402633 systemd[1]: Started cri-containerd-a91b5836fb43b400b27bba3213b854da5f99bc4ffa6c3dd312ebf7115e006b51.scope - libcontainer container a91b5836fb43b400b27bba3213b854da5f99bc4ffa6c3dd312ebf7115e006b51. 
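The repeated driver-call.go/plugins.go triplets come from kubelet's FlexVolume plugin prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and JSON-decodes its stdout. Until Calico's flexvol-driver init container installs that binary into the flexvol-driver-host path (the pod2daemon-flexvol image pulled later in this log), the exec fails, stdout stays empty, and unmarshalling an empty string returns "unexpected end of JSON input". A minimal Go sketch of that exec-and-unmarshal pattern follows; it is illustrative, not kubelet's source.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is the minimal JSON shape a FlexVolume driver must print.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, execErr := exec.Command(path, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With the driver binary missing, out is empty, so Unmarshal
		// fails with "unexpected end of JSON input", as in the log.
		return nil, fmt.Errorf("failed to unmarshal output %q for command %v: %v (exec error: %v)",
			string(out), args, err, execErr)
	}
	return &st, nil
}

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if _, err := callDriver(driver, "init"); err != nil {
		fmt.Println("FlexVolume: driver call failed:", err)
	}
}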
Jan 13 20:31:22.441030 containerd[1444]: time="2025-01-13T20:31:22.440966347Z" level=info msg="StartContainer for \"a91b5836fb43b400b27bba3213b854da5f99bc4ffa6c3dd312ebf7115e006b51\" returns successfully" Jan 13 20:31:22.483465 kubelet[2613]: E0113 20:31:22.483420 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:22.553464 kubelet[2613]: E0113 20:31:22.553430 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:22.562636 kubelet[2613]: I0113 20:31:22.562592 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-85d5fd46b5-9pcmj" podStartSLOduration=1.424008999 podStartE2EDuration="2.56254108s" podCreationTimestamp="2025-01-13 20:31:20 +0000 UTC" firstStartedPulling="2025-01-13 20:31:21.22042916 +0000 UTC m=+19.831882146" lastFinishedPulling="2025-01-13 20:31:22.358961241 +0000 UTC m=+20.970414227" observedRunningTime="2025-01-13 20:31:22.561450032 +0000 UTC m=+21.172903018" watchObservedRunningTime="2025-01-13 20:31:22.56254108 +0000 UTC m=+21.173994066" Jan 13 20:31:22.601927 kubelet[2613]: E0113 20:31:22.601802 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:22.601927 kubelet[2613]: W0113 20:31:22.601827 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:22.601927 kubelet[2613]: E0113 20:31:22.601849 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:22.602186 kubelet[2613]: E0113 20:31:22.602173 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:22.602337 kubelet[2613]: W0113 20:31:22.602238 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:22.602337 kubelet[2613]: E0113 20:31:22.602257 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:31:22.602506 kubelet[2613]: E0113 20:31:22.602493 2613 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:31:22.602566 kubelet[2613]: W0113 20:31:22.602555 2613 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:31:22.602642 kubelet[2613]: E0113 20:31:22.602632 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the FlexVolume init probe triplet repeats verbatim with timestamps 20:31:22.602846 through 20:31:22.620219] Jan 13 20:31:22.620301 kubelet[2613]: E0113 20:31:22.620289 2613 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:31:23.282679 containerd[1444]: time="2025-01-13T20:31:23.282612679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:23.283354 containerd[1444]: time="2025-01-13T20:31:23.283305186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 13 20:31:23.283841 containerd[1444]: time="2025-01-13T20:31:23.283804373Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:23.285858 containerd[1444]: time="2025-01-13T20:31:23.285824883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:23.286710 containerd[1444]: time="2025-01-13T20:31:23.286455337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 925.012893ms" Jan 13 20:31:23.286710 containerd[1444]: time="2025-01-13T20:31:23.286484023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:31:23.288261 containerd[1444]: time="2025-01-13T20:31:23.288225434Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:31:23.300688 containerd[1444]: time="2025-01-13T20:31:23.300648439Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3\"" Jan 13 20:31:23.301332 containerd[1444]: time="2025-01-13T20:31:23.301273813Z" level=info msg="StartContainer for \"55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3\"" Jan 13 20:31:23.333561 systemd[1]: Started cri-containerd-55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3.scope - libcontainer container 55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3. Jan 13 20:31:23.388840 systemd[1]: cri-containerd-55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3.scope: Deactivated successfully. 
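The flexvol-driver init container pulled and started above is what is expected to install the uds binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/; until it has run, kubelet's periodic plugin probe finds no executable there, gets empty output from the init call, and the JSON unmarshal fails exactly as in the driver-call.go errors above. A FlexVolume driver answers init with a one-line JSON status; a minimal stub in Go (hypothetical sketch, not Calico's actual driver):

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // initResult mirrors the one-line JSON a FlexVolume driver prints in
    // reply to "init"; the empty output kubelet saw above is what triggers
    // "unexpected end of JSON input".
    type initResult struct {
        Status       string          `json:"status"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, err := json.Marshal(initResult{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            if err != nil {
                os.Exit(1)
            }
            fmt.Println(string(out))
        }
    }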
Jan 13 20:31:23.412388 containerd[1444]: time="2025-01-13T20:31:23.412317658Z" level=info msg="StartContainer for \"55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3\" returns successfully" Jan 13 20:31:23.438642 containerd[1444]: time="2025-01-13T20:31:23.432237260Z" level=info msg="shim disconnected" id=55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3 namespace=k8s.io Jan 13 20:31:23.438824 containerd[1444]: time="2025-01-13T20:31:23.438656947Z" level=warning msg="cleaning up after shim disconnected" id=55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3 namespace=k8s.io Jan 13 20:31:23.438824 containerd[1444]: time="2025-01-13T20:31:23.438675070Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:31:23.555614 kubelet[2613]: E0113 20:31:23.555510 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:23.556664 kubelet[2613]: I0113 20:31:23.556645 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:23.556940 containerd[1444]: time="2025-01-13T20:31:23.556911927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:31:23.559042 kubelet[2613]: E0113 20:31:23.558993 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:24.003489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55855cb58866c066ebc03c515ba21770d3ea1059237d838c59be2dbb036f40b3-rootfs.mount: Deactivated successfully. Jan 13 20:31:24.483249 kubelet[2613]: E0113 20:31:24.483126 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:26.482958 kubelet[2613]: E0113 20:31:26.482917 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:26.617880 containerd[1444]: time="2025-01-13T20:31:26.617829870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:26.618673 containerd[1444]: time="2025-01-13T20:31:26.618464262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:31:26.620215 containerd[1444]: time="2025-01-13T20:31:26.620165080Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:26.622308 containerd[1444]: time="2025-01-13T20:31:26.622280051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:26.623445 containerd[1444]: time="2025-01-13T20:31:26.622931005Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.06597963s" Jan 13 20:31:26.623445 containerd[1444]: time="2025-01-13T20:31:26.622963371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:31:26.625182 containerd[1444]: time="2025-01-13T20:31:26.625150515Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:31:26.642151 containerd[1444]: time="2025-01-13T20:31:26.642096568Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee\"" Jan 13 20:31:26.642798 containerd[1444]: time="2025-01-13T20:31:26.642770246Z" level=info msg="StartContainer for \"c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee\"" Jan 13 20:31:26.675582 systemd[1]: Started cri-containerd-c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee.scope - libcontainer container c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee. Jan 13 20:31:26.759983 containerd[1444]: time="2025-01-13T20:31:26.759935764Z" level=info msg="StartContainer for \"c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee\" returns successfully" Jan 13 20:31:27.231877 systemd[1]: cri-containerd-c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee.scope: Deactivated successfully. Jan 13 20:31:27.250304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee-rootfs.mount: Deactivated successfully. 
Jan 13 20:31:27.256423 containerd[1444]: time="2025-01-13T20:31:27.256354420Z" level=info msg="shim disconnected" id=c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee namespace=k8s.io Jan 13 20:31:27.256423 containerd[1444]: time="2025-01-13T20:31:27.256421511Z" level=warning msg="cleaning up after shim disconnected" id=c11c6da05d5039f94b044c78fc20e71428c31e22fa3f430589d13e8146ce96ee namespace=k8s.io Jan 13 20:31:27.256587 containerd[1444]: time="2025-01-13T20:31:27.256430753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:31:27.290373 kubelet[2613]: I0113 20:31:27.290259 2613 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:31:27.316590 kubelet[2613]: I0113 20:31:27.316115 2613 topology_manager.go:215] "Topology Admit Handler" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" podNamespace="calico-apiserver" podName="calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:27.319706 kubelet[2613]: I0113 20:31:27.319557 2613 topology_manager.go:215] "Topology Admit Handler" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" podNamespace="calico-apiserver" podName="calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:27.321013 kubelet[2613]: I0113 20:31:27.320987 2613 topology_manager.go:215] "Topology Admit Handler" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" podNamespace="kube-system" podName="coredns-76f75df574-hlvx2" Jan 13 20:31:27.322180 kubelet[2613]: I0113 20:31:27.322138 2613 topology_manager.go:215] "Topology Admit Handler" podUID="998cfadf-febb-495f-927c-5b5b4a548933" podNamespace="kube-system" podName="coredns-76f75df574-v72tl" Jan 13 20:31:27.323773 kubelet[2613]: I0113 20:31:27.323745 2613 topology_manager.go:215] "Topology Admit Handler" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" podNamespace="calico-system" podName="calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:27.329085 systemd[1]: Created slice kubepods-besteffort-pod1cfacfd0_9476_421d_9ba4_8948bbbe88e8.slice - libcontainer container kubepods-besteffort-pod1cfacfd0_9476_421d_9ba4_8948bbbe88e8.slice. 
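The slice names systemd reports for these newly admitted pods are derived mechanically from the pod UIDs: the QoS class picks the parent (kubepods-besteffort vs kubepods-burstable) and dashes in the UID become underscores. A small illustrative Go helper (hypothetical; kubelet's cgroup manager does the real mapping):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName reproduces the naming visible in the log: QoS class picks the
    // parent slice and dashes in the pod UID become underscores.
    func sliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        // Prints kubepods-besteffort-pod1cfacfd0_9476_421d_9ba4_8948bbbe88e8.slice,
        // matching the slice systemd just created.
        fmt.Println(sliceName("besteffort", "1cfacfd0-9476-421d-9ba4-8948bbbe88e8"))
    }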
Jan 13 20:31:27.355653 kubelet[2613]: I0113 20:31:27.355625 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-665c4\" (UniqueName: \"kubernetes.io/projected/e9fd979c-1ebe-4de8-a229-23c188a43678-kube-api-access-665c4\") pod \"coredns-76f75df574-hlvx2\" (UID: \"e9fd979c-1ebe-4de8-a229-23c188a43678\") " pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:27.356094 kubelet[2613]: I0113 20:31:27.355804 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b42b7cb4-9adc-45b0-a43d-a62a51c30a4e-tigera-ca-bundle\") pod \"calico-kube-controllers-85d666cdf5-4jgmf\" (UID: \"b42b7cb4-9adc-45b0-a43d-a62a51c30a4e\") " pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:27.356094 kubelet[2613]: I0113 20:31:27.355838 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1cfacfd0-9476-421d-9ba4-8948bbbe88e8-calico-apiserver-certs\") pod \"calico-apiserver-5d7f47bd54-kjr8q\" (UID: \"1cfacfd0-9476-421d-9ba4-8948bbbe88e8\") " pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:27.356094 kubelet[2613]: I0113 20:31:27.355865 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xjs\" (UniqueName: \"kubernetes.io/projected/ef49bc1a-4bb1-4428-954b-8600a024bc5a-kube-api-access-75xjs\") pod \"calico-apiserver-5d7f47bd54-dc6xn\" (UID: \"ef49bc1a-4bb1-4428-954b-8600a024bc5a\") " pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:27.356094 kubelet[2613]: I0113 20:31:27.355890 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/998cfadf-febb-495f-927c-5b5b4a548933-config-volume\") pod \"coredns-76f75df574-v72tl\" (UID: \"998cfadf-febb-495f-927c-5b5b4a548933\") " pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:27.356094 kubelet[2613]: I0113 20:31:27.355913 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwv5z\" (UniqueName: \"kubernetes.io/projected/998cfadf-febb-495f-927c-5b5b4a548933-kube-api-access-jwv5z\") pod \"coredns-76f75df574-v72tl\" (UID: \"998cfadf-febb-495f-927c-5b5b4a548933\") " pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:27.356246 kubelet[2613]: I0113 20:31:27.355933 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ef49bc1a-4bb1-4428-954b-8600a024bc5a-calico-apiserver-certs\") pod \"calico-apiserver-5d7f47bd54-dc6xn\" (UID: \"ef49bc1a-4bb1-4428-954b-8600a024bc5a\") " pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:27.356246 kubelet[2613]: I0113 20:31:27.355955 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbwgt\" (UniqueName: \"kubernetes.io/projected/1cfacfd0-9476-421d-9ba4-8948bbbe88e8-kube-api-access-jbwgt\") pod \"calico-apiserver-5d7f47bd54-kjr8q\" (UID: \"1cfacfd0-9476-421d-9ba4-8948bbbe88e8\") " pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:27.356246 kubelet[2613]: I0113 20:31:27.355976 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9fd979c-1ebe-4de8-a229-23c188a43678-config-volume\") pod \"coredns-76f75df574-hlvx2\" (UID: \"e9fd979c-1ebe-4de8-a229-23c188a43678\") " pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:27.356246 kubelet[2613]: I0113 20:31:27.355998 2613 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx5s6\" (UniqueName: \"kubernetes.io/projected/b42b7cb4-9adc-45b0-a43d-a62a51c30a4e-kube-api-access-sx5s6\") pod \"calico-kube-controllers-85d666cdf5-4jgmf\" (UID: \"b42b7cb4-9adc-45b0-a43d-a62a51c30a4e\") " pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:27.360370 systemd[1]: Created slice kubepods-besteffort-podef49bc1a_4bb1_4428_954b_8600a024bc5a.slice - libcontainer container kubepods-besteffort-podef49bc1a_4bb1_4428_954b_8600a024bc5a.slice. Jan 13 20:31:27.366756 systemd[1]: Created slice kubepods-burstable-pode9fd979c_1ebe_4de8_a229_23c188a43678.slice - libcontainer container kubepods-burstable-pode9fd979c_1ebe_4de8_a229_23c188a43678.slice. Jan 13 20:31:27.374331 systemd[1]: Created slice kubepods-burstable-pod998cfadf_febb_495f_927c_5b5b4a548933.slice - libcontainer container kubepods-burstable-pod998cfadf_febb_495f_927c_5b5b4a548933.slice. Jan 13 20:31:27.380432 systemd[1]: Created slice kubepods-besteffort-podb42b7cb4_9adc_45b0_a43d_a62a51c30a4e.slice - libcontainer container kubepods-besteffort-podb42b7cb4_9adc_45b0_a43d_a62a51c30a4e.slice. Jan 13 20:31:27.566043 kubelet[2613]: E0113 20:31:27.566012 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:27.566769 containerd[1444]: time="2025-01-13T20:31:27.566723035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 13 20:31:27.655075 containerd[1444]: time="2025-01-13T20:31:27.655036562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:31:27.664243 containerd[1444]: time="2025-01-13T20:31:27.664192188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:31:27.671960 kubelet[2613]: E0113 20:31:27.671912 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:27.672766 containerd[1444]: time="2025-01-13T20:31:27.672516117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:0,}" Jan 13 20:31:27.677960 kubelet[2613]: E0113 20:31:27.677842 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:27.678499 containerd[1444]: time="2025-01-13T20:31:27.678298989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:0,}" Jan 13 20:31:27.684361 containerd[1444]: time="2025-01-13T20:31:27.684319819Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:0,}" Jan 13 20:31:28.156712 containerd[1444]: time="2025-01-13T20:31:28.155243741Z" level=error msg="Failed to destroy network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.156976 containerd[1444]: time="2025-01-13T20:31:28.156062067Z" level=error msg="Failed to destroy network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.162622 containerd[1444]: time="2025-01-13T20:31:28.162569991Z" level=error msg="encountered an error cleaning up failed sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.162698 containerd[1444]: time="2025-01-13T20:31:28.162665766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.164166 containerd[1444]: time="2025-01-13T20:31:28.164117630Z" level=error msg="encountered an error cleaning up failed sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.164233 containerd[1444]: time="2025-01-13T20:31:28.164196122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.165157 containerd[1444]: time="2025-01-13T20:31:28.165119424Z" level=error msg="Failed to destroy network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.165867 containerd[1444]: time="2025-01-13T20:31:28.165828333Z" level=error msg="encountered an error cleaning up failed sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.165912 containerd[1444]: time="2025-01-13T20:31:28.165882462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.166606 kubelet[2613]: E0113 20:31:28.166389 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.166606 kubelet[2613]: E0113 20:31:28.166416 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.166606 kubelet[2613]: E0113 20:31:28.166472 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:28.166606 kubelet[2613]: E0113 20:31:28.166375 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.166973 kubelet[2613]: E0113 20:31:28.166494 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:28.166973 kubelet[2613]: E0113 20:31:28.166502 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:28.166973 kubelet[2613]: E0113 20:31:28.166520 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:28.166973 kubelet[2613]: E0113 20:31:28.166473 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:28.167072 kubelet[2613]: E0113 20:31:28.166562 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v72tl" podUID="998cfadf-febb-495f-927c-5b5b4a548933" Jan 13 20:31:28.167072 kubelet[2613]: E0113 20:31:28.166584 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:28.167072 kubelet[2613]: E0113 20:31:28.166640 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" Jan 13 20:31:28.167184 kubelet[2613]: E0113 20:31:28.166547 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hlvx2" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" Jan 13 20:31:28.177284 containerd[1444]: time="2025-01-13T20:31:28.177143238Z" level=error msg="Failed to destroy network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.178355 containerd[1444]: time="2025-01-13T20:31:28.177647196Z" level=error msg="encountered an error cleaning up failed sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.178355 containerd[1444]: time="2025-01-13T20:31:28.177706005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.178655 kubelet[2613]: E0113 20:31:28.178616 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.178732 kubelet[2613]: E0113 20:31:28.178680 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:28.178732 kubelet[2613]: E0113 20:31:28.178701 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:28.178835 kubelet[2613]: E0113 20:31:28.178754 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" Jan 13 20:31:28.180307 containerd[1444]: time="2025-01-13T20:31:28.180233915Z" level=error msg="Failed to destroy network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.180635 containerd[1444]: time="2025-01-13T20:31:28.180578408Z" level=error msg="encountered an error cleaning up failed sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.180770 containerd[1444]: time="2025-01-13T20:31:28.180647139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.181067 kubelet[2613]: E0113 20:31:28.180927 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.181067 kubelet[2613]: E0113 20:31:28.180971 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:28.181067 kubelet[2613]: E0113 20:31:28.180989 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 
13 20:31:28.181171 kubelet[2613]: E0113 20:31:28.181039 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" Jan 13 20:31:28.490423 systemd[1]: Created slice kubepods-besteffort-podd62b149c_90ef_4582_bf5b_b3dad659f453.slice - libcontainer container kubepods-besteffort-podd62b149c_90ef_4582_bf5b_b3dad659f453.slice. Jan 13 20:31:28.494140 containerd[1444]: time="2025-01-13T20:31:28.493909289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:0,}" Jan 13 20:31:28.556862 containerd[1444]: time="2025-01-13T20:31:28.556652245Z" level=error msg="Failed to destroy network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.557166 containerd[1444]: time="2025-01-13T20:31:28.557138960Z" level=error msg="encountered an error cleaning up failed sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.557225 containerd[1444]: time="2025-01-13T20:31:28.557203170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.557499 kubelet[2613]: E0113 20:31:28.557470 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.557552 kubelet[2613]: E0113 20:31:28.557527 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:28.557552 kubelet[2613]: E0113 20:31:28.557546 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:28.557612 kubelet[2613]: E0113 20:31:28.557603 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:28.569645 kubelet[2613]: I0113 20:31:28.568463 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea" Jan 13 20:31:28.570526 containerd[1444]: time="2025-01-13T20:31:28.570201015Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:31:28.570526 containerd[1444]: time="2025-01-13T20:31:28.570372321Z" level=info msg="Ensure that sandbox 9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea in task-service has been cleanup successfully" Jan 13 20:31:28.570791 containerd[1444]: time="2025-01-13T20:31:28.570707653Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:31:28.570791 containerd[1444]: time="2025-01-13T20:31:28.570729296Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:31:28.571581 containerd[1444]: time="2025-01-13T20:31:28.571545302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:1,}" Jan 13 20:31:28.572080 kubelet[2613]: I0113 20:31:28.572052 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688" Jan 13 20:31:28.572742 containerd[1444]: time="2025-01-13T20:31:28.572719843Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" Jan 13 20:31:28.572884 containerd[1444]: time="2025-01-13T20:31:28.572865586Z" level=info msg="Ensure that sandbox 201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688 in task-service has been cleanup successfully" Jan 13 20:31:28.573413 kubelet[2613]: I0113 20:31:28.573286 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e" Jan 13 20:31:28.573828 containerd[1444]: 
time="2025-01-13T20:31:28.573801050Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" Jan 13 20:31:28.574100 containerd[1444]: time="2025-01-13T20:31:28.574075852Z" level=info msg="Ensure that sandbox 5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e in task-service has been cleanup successfully" Jan 13 20:31:28.575058 containerd[1444]: time="2025-01-13T20:31:28.575020358Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully" Jan 13 20:31:28.575058 containerd[1444]: time="2025-01-13T20:31:28.575049322Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully" Jan 13 20:31:28.575134 containerd[1444]: time="2025-01-13T20:31:28.575086448Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully" Jan 13 20:31:28.575134 containerd[1444]: time="2025-01-13T20:31:28.575097610Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully" Jan 13 20:31:28.576199 kubelet[2613]: E0113 20:31:28.576180 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:28.576263 kubelet[2613]: E0113 20:31:28.576180 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:28.576726 containerd[1444]: time="2025-01-13T20:31:28.576524390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:1,}" Jan 13 20:31:28.576780 kubelet[2613]: I0113 20:31:28.576653 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14" Jan 13 20:31:28.576915 containerd[1444]: time="2025-01-13T20:31:28.576726981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:1,}" Jan 13 20:31:28.577544 containerd[1444]: time="2025-01-13T20:31:28.577521544Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:31:28.577674 containerd[1444]: time="2025-01-13T20:31:28.577656244Z" level=info msg="Ensure that sandbox 0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14 in task-service has been cleanup successfully" Jan 13 20:31:28.579311 containerd[1444]: time="2025-01-13T20:31:28.577871918Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:31:28.579311 containerd[1444]: time="2025-01-13T20:31:28.577894281Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:31:28.579311 containerd[1444]: time="2025-01-13T20:31:28.578448287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:31:28.579441 kubelet[2613]: I0113 20:31:28.578834 2613 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08" Jan 13 20:31:28.579476 containerd[1444]: time="2025-01-13T20:31:28.579442160Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:31:28.579596 containerd[1444]: time="2025-01-13T20:31:28.579576741Z" level=info msg="Ensure that sandbox 5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08 in task-service has been cleanup successfully" Jan 13 20:31:28.579921 kubelet[2613]: I0113 20:31:28.579900 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531" Jan 13 20:31:28.580441 containerd[1444]: time="2025-01-13T20:31:28.580395147Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" Jan 13 20:31:28.580823 containerd[1444]: time="2025-01-13T20:31:28.580619461Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:31:28.580823 containerd[1444]: time="2025-01-13T20:31:28.580673510Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:31:28.580823 containerd[1444]: time="2025-01-13T20:31:28.580642745Z" level=info msg="Ensure that sandbox 45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531 in task-service has been cleanup successfully" Jan 13 20:31:28.580937 containerd[1444]: time="2025-01-13T20:31:28.580882062Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully" Jan 13 20:31:28.580937 containerd[1444]: time="2025-01-13T20:31:28.580899345Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully" Jan 13 20:31:28.581192 containerd[1444]: time="2025-01-13T20:31:28.581164625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:31:28.581534 containerd[1444]: time="2025-01-13T20:31:28.581266841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:1,}" Jan 13 20:31:28.638962 systemd[1]: run-netns-cni\x2d0fbf11b0\x2d6633\x2df22b\x2de875\x2d57e1b88b0468.mount: Deactivated successfully. Jan 13 20:31:28.639465 systemd[1]: run-netns-cni\x2dd906af40\x2d402f\x2deaa3\x2da735\x2d275070d79dd2.mount: Deactivated successfully. Jan 13 20:31:28.639673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08-shm.mount: Deactivated successfully. Jan 13 20:31:28.639848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14-shm.mount: Deactivated successfully. 
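Every sandbox attempt in this cycle fails on the same stat: /var/lib/calico/nodename is written by the calico/node container once it is up, and the Calico CNI plugin refuses to add or delete pod networks until the file exists, which is exactly the remediation hint embedded in each error. A minimal Go sketch of that gate (assumed simplification of projectcalico's CNI plugin):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // calico/node writes this file at startup; until then every CNI
        // add/delete fails with the stat error seen in each sandbox attempt.
        name, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            fmt.Println(`plugin type="calico" failed:`, err)
            os.Exit(1)
        }
        fmt.Println("nodename:", string(name))
    }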
Jan 13 20:31:28.834583 containerd[1444]: time="2025-01-13T20:31:28.834534580Z" level=error msg="Failed to destroy network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.835351 containerd[1444]: time="2025-01-13T20:31:28.835317580Z" level=error msg="encountered an error cleaning up failed sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.835529 containerd[1444]: time="2025-01-13T20:31:28.835504849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.835876 kubelet[2613]: E0113 20:31:28.835841 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.835971 kubelet[2613]: E0113 20:31:28.835904 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:28.835971 kubelet[2613]: E0113 20:31:28.835930 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:28.836022 kubelet[2613]: E0113 20:31:28.835981 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" Jan 13 20:31:28.868272 containerd[1444]: time="2025-01-13T20:31:28.868221495Z" level=error msg="Failed to destroy network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.868827 containerd[1444]: time="2025-01-13T20:31:28.868784661Z" level=error msg="encountered an error cleaning up failed sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.868898 containerd[1444]: time="2025-01-13T20:31:28.868852312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.869451 kubelet[2613]: E0113 20:31:28.869359 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.869546 kubelet[2613]: E0113 20:31:28.869482 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:28.869546 kubelet[2613]: E0113 20:31:28.869510 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:28.869900 kubelet[2613]: E0113 20:31:28.869866 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hlvx2" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" Jan 13 20:31:28.877431 containerd[1444]: time="2025-01-13T20:31:28.876314383Z" level=error msg="Failed to destroy network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.881715 containerd[1444]: time="2025-01-13T20:31:28.881661167Z" level=error msg="Failed to destroy network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.881972 containerd[1444]: time="2025-01-13T20:31:28.881666128Z" level=error msg="Failed to destroy network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882031 containerd[1444]: time="2025-01-13T20:31:28.881990578Z" level=error msg="encountered an error cleaning up failed sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882064 containerd[1444]: time="2025-01-13T20:31:28.882047347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882403 containerd[1444]: time="2025-01-13T20:31:28.882251178Z" level=error msg="encountered an error cleaning up failed sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882403 containerd[1444]: time="2025-01-13T20:31:28.882300746Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882528 kubelet[2613]: E0113 20:31:28.882273 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882528 kubelet[2613]: E0113 20:31:28.882327 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:28.882528 kubelet[2613]: E0113 20:31:28.882468 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.882528 kubelet[2613]: E0113 20:31:28.882506 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:28.882668 kubelet[2613]: E0113 20:31:28.882525 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:28.882668 kubelet[2613]: E0113 20:31:28.882577 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" Jan 13 20:31:28.882905 kubelet[2613]: E0113 20:31:28.882877 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:28.882963 kubelet[2613]: E0113 20:31:28.882941 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:28.886874 containerd[1444]: time="2025-01-13T20:31:28.886662939Z" level=error msg="encountered an error cleaning up failed sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.886874 containerd[1444]: time="2025-01-13T20:31:28.886772956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.887595 kubelet[2613]: E0113 20:31:28.887566 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.887925 kubelet[2613]: E0113 20:31:28.887899 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:28.887999 kubelet[2613]: E0113 20:31:28.887937 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:28.888032 kubelet[2613]: E0113 20:31:28.888004 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" Jan 13 20:31:28.907075 containerd[1444]: time="2025-01-13T20:31:28.907022038Z" level=error msg="Failed to destroy network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.921766 containerd[1444]: time="2025-01-13T20:31:28.921676418Z" level=error msg="encountered an error cleaning up failed sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.921915 containerd[1444]: time="2025-01-13T20:31:28.921793956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.922052 kubelet[2613]: E0113 20:31:28.922025 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:28.922113 kubelet[2613]: E0113 20:31:28.922083 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:28.922113 kubelet[2613]: E0113 20:31:28.922105 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:28.922171 kubelet[2613]: E0113 20:31:28.922153 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v72tl" podUID="998cfadf-febb-495f-927c-5b5b4a548933" Jan 13 20:31:29.586397 kubelet[2613]: I0113 20:31:29.586354 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91" Jan 13 20:31:29.587508 containerd[1444]: time="2025-01-13T20:31:29.587320425Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:31:29.587508 containerd[1444]: time="2025-01-13T20:31:29.587854863Z" level=info msg="Ensure that sandbox b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91 in task-service has been cleanup successfully" Jan 13 20:31:29.587508 containerd[1444]: time="2025-01-13T20:31:29.588155466Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:31:29.587508 containerd[1444]: time="2025-01-13T20:31:29.588173269Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:31:29.589309 containerd[1444]: time="2025-01-13T20:31:29.588602091Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:31:29.589309 containerd[1444]: time="2025-01-13T20:31:29.588690143Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:31:29.589309 containerd[1444]: time="2025-01-13T20:31:29.588701465Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:31:29.589309 containerd[1444]: time="2025-01-13T20:31:29.589156531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:31:29.589434 kubelet[2613]: I0113 20:31:29.587830 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720" Jan 13 20:31:29.589434 kubelet[2613]: I0113 20:31:29.589307 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b" Jan 13 20:31:29.589990 containerd[1444]: time="2025-01-13T20:31:29.589750857Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\"" Jan 13 20:31:29.589990 containerd[1444]: time="2025-01-13T20:31:29.589898638Z" level=info msg="Ensure that sandbox 2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b in task-service has been cleanup successfully" Jan 13 20:31:29.590336 containerd[1444]: time="2025-01-13T20:31:29.590312018Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\"" 
Jan 13 20:31:29.590562 containerd[1444]: time="2025-01-13T20:31:29.590393630Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully" Jan 13 20:31:29.590562 containerd[1444]: time="2025-01-13T20:31:29.590414353Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully" Jan 13 20:31:29.590625 containerd[1444]: time="2025-01-13T20:31:29.590596539Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" Jan 13 20:31:29.590684 containerd[1444]: time="2025-01-13T20:31:29.590657668Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully" Jan 13 20:31:29.591051 containerd[1444]: time="2025-01-13T20:31:29.590672430Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully" Jan 13 20:31:29.591051 containerd[1444]: time="2025-01-13T20:31:29.591006798Z" level=info msg="Ensure that sandbox 8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720 in task-service has been cleanup successfully" Jan 13 20:31:29.591291 containerd[1444]: time="2025-01-13T20:31:29.591267996Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully" Jan 13 20:31:29.591522 containerd[1444]: time="2025-01-13T20:31:29.591288919Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully" Jan 13 20:31:29.591936 containerd[1444]: time="2025-01-13T20:31:29.591881645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:2,}" Jan 13 20:31:29.592487 containerd[1444]: time="2025-01-13T20:31:29.591889246Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:31:29.592487 containerd[1444]: time="2025-01-13T20:31:29.592154004Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:31:29.592590 kubelet[2613]: I0113 20:31:29.592095 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0" Jan 13 20:31:29.592905 containerd[1444]: time="2025-01-13T20:31:29.592164606Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:31:29.593587 containerd[1444]: time="2025-01-13T20:31:29.593016089Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:31:29.593587 containerd[1444]: time="2025-01-13T20:31:29.593521042Z" level=info msg="Ensure that sandbox af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0 in task-service has been cleanup successfully" Jan 13 20:31:29.594025 containerd[1444]: time="2025-01-13T20:31:29.593725031Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:31:29.594025 containerd[1444]: time="2025-01-13T20:31:29.593744754Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:31:29.594565 
containerd[1444]: time="2025-01-13T20:31:29.594544390Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:31:29.594629 containerd[1444]: time="2025-01-13T20:31:29.594608359Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:31:29.594629 containerd[1444]: time="2025-01-13T20:31:29.594618961Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:31:29.594714 containerd[1444]: time="2025-01-13T20:31:29.594609399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:31:29.595433 containerd[1444]: time="2025-01-13T20:31:29.595404634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:2,}" Jan 13 20:31:29.595947 kubelet[2613]: I0113 20:31:29.595836 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2" Jan 13 20:31:29.597552 containerd[1444]: time="2025-01-13T20:31:29.597173010Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\"" Jan 13 20:31:29.597552 containerd[1444]: time="2025-01-13T20:31:29.597326912Z" level=info msg="Ensure that sandbox 9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2 in task-service has been cleanup successfully" Jan 13 20:31:29.597665 containerd[1444]: time="2025-01-13T20:31:29.597566547Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully" Jan 13 20:31:29.597665 containerd[1444]: time="2025-01-13T20:31:29.597585389Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully" Jan 13 20:31:29.600301 kubelet[2613]: I0113 20:31:29.599868 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2" Jan 13 20:31:29.600584 containerd[1444]: time="2025-01-13T20:31:29.600552978Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\"" Jan 13 20:31:29.600769 containerd[1444]: time="2025-01-13T20:31:29.600750127Z" level=info msg="Ensure that sandbox 79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2 in task-service has been cleanup successfully" Jan 13 20:31:29.602106 containerd[1444]: time="2025-01-13T20:31:29.602068638Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully" Jan 13 20:31:29.602106 containerd[1444]: time="2025-01-13T20:31:29.602095161Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully" Jan 13 20:31:29.603545 containerd[1444]: time="2025-01-13T20:31:29.603457798Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" Jan 13 20:31:29.603897 containerd[1444]: time="2025-01-13T20:31:29.603622262Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully" Jan 
13 20:31:29.603897 containerd[1444]: time="2025-01-13T20:31:29.603638585Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully" Jan 13 20:31:29.603979 kubelet[2613]: E0113 20:31:29.603825 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:29.605208 containerd[1444]: time="2025-01-13T20:31:29.605131961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:2,}" Jan 13 20:31:29.609114 containerd[1444]: time="2025-01-13T20:31:29.609075891Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" Jan 13 20:31:29.609187 containerd[1444]: time="2025-01-13T20:31:29.609166664Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully" Jan 13 20:31:29.609187 containerd[1444]: time="2025-01-13T20:31:29.609177865Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully" Jan 13 20:31:29.609553 kubelet[2613]: E0113 20:31:29.609403 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:29.610105 containerd[1444]: time="2025-01-13T20:31:29.609885088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:2,}" Jan 13 20:31:29.647302 systemd[1]: run-netns-cni\x2d5584fb84\x2d9bd6\x2d55b7\x2d7d7a\x2d0e0a21906fa0.mount: Deactivated successfully. Jan 13 20:31:29.647409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720-shm.mount: Deactivated successfully. Jan 13 20:31:29.647462 systemd[1]: run-netns-cni\x2d44434f84\x2d4eb4\x2d3a77\x2da55a\x2d33ad58a6b4c0.mount: Deactivated successfully. Jan 13 20:31:29.647510 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2-shm.mount: Deactivated successfully. Jan 13 20:31:29.647557 systemd[1]: run-netns-cni\x2d59c763eb\x2df147\x2d9311\x2d692d\x2d08d2fb248472.mount: Deactivated successfully. Jan 13 20:31:29.647600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b-shm.mount: Deactivated successfully. Jan 13 20:31:29.647645 systemd[1]: run-netns-cni\x2da670aaf9\x2d649b\x2deb86\x2de808\x2d8a54d44dc272.mount: Deactivated successfully. Jan 13 20:31:29.647690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91-shm.mount: Deactivated successfully. Jan 13 20:31:29.647737 systemd[1]: run-netns-cni\x2d1a191a09\x2d8927\x2d334e\x2dc081\x2ddc54706e192e.mount: Deactivated successfully. Jan 13 20:31:29.647780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0-shm.mount: Deactivated successfully. 
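The info-level entries above are the recovery half of the loop: for each failed sandbox, kubelet notes the container is gone (pod_container_deletor), containerd runs StopPodSandbox, ensures the task-service cleanup, tears the network down, and RunPodSandbox is reissued with the Attempt counter bumped (Attempt:1 failed earlier; Attempt:2 is scheduled here and fails below; Attempt:3 follows). The interleaved dns.go:153 warnings are a separate, benign issue: kubelet applies at most three nameservers, so it kept 1.1.1.1, 1.0.0.1 and 8.8.8.8 and omitted the rest. A rough Go sketch of the retry shape, with hypothetical runSandbox/stopSandbox stand-ins for the CRI calls seen in the log:

package main

import (
	"errors"
	"fmt"
)

var errNoNodename = errors.New("stat /var/lib/calico/nodename: no such file or directory")

// Hypothetical stand-ins for the CRI calls visible above
// (RunPodSandbox, then StopPodSandbox/TearDown on failure).
func runSandbox(pod string, attempt int) error { return errNoNodename }
func stopSandbox(pod string)                   { /* tear down network, free netns and shm mounts */ }

func main() {
	const pod = "csi-node-driver-xmsjq"
	for attempt := 1; attempt <= 3; attempt++ {
		if err := runSandbox(pod, attempt); err != nil {
			fmt.Printf("RunPodSandbox %s Attempt:%d failed: %v\n", pod, attempt, err)
			stopSandbox(pod) // the StopPodSandbox/TearDown entries above
			continue
		}
		fmt.Printf("RunPodSandbox %s Attempt:%d succeeded\n", pod, attempt)
		return
	}
}

In the real kubelet the retry is driven by the pod sync loop with backoff rather than a bounded for loop; the sketch only shows why the Attempt number keeps climbing while the root cause persists.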
Jan 13 20:31:29.703662 containerd[1444]: time="2025-01-13T20:31:29.703589436Z" level=error msg="Failed to destroy network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.706757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4-shm.mount: Deactivated successfully. Jan 13 20:31:29.708614 containerd[1444]: time="2025-01-13T20:31:29.708563435Z" level=error msg="encountered an error cleaning up failed sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.708684 containerd[1444]: time="2025-01-13T20:31:29.708633925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.710063 kubelet[2613]: E0113 20:31:29.710022 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.710135 kubelet[2613]: E0113 20:31:29.710077 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:29.710135 kubelet[2613]: E0113 20:31:29.710103 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:29.710204 kubelet[2613]: E0113 20:31:29.710149 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:29.716221 containerd[1444]: time="2025-01-13T20:31:29.716181656Z" level=error msg="Failed to destroy network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.716658 containerd[1444]: time="2025-01-13T20:31:29.716629081Z" level=error msg="encountered an error cleaning up failed sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.717069 containerd[1444]: time="2025-01-13T20:31:29.716773302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.718312 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30-shm.mount: Deactivated successfully. 
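The kubelet entries that follow show the same containerd error after it crosses the CRI gRPC boundary: when no explicit status code is attached on the server side, the client sees codes.Unknown, which is the "rpc error: code = Unknown desc = ..." form logged here. A short sketch of that surfacing using the grpc status package (setupNetwork is a hypothetical stand-in for the failing CNI add):

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

// Hypothetical stand-in for the failing CNI add inside containerd.
func setupNetwork() error {
	return errors.New(`failed to setup network for sandbox "afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30": plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}

func main() {
	err := setupNetwork()
	// status.Convert wraps any plain error as codes.Unknown, which is
	// exactly the "rpc error: code = Unknown desc = ..." shape kubelet logs.
	st := status.Convert(err)
	fmt.Printf("rpc error: code = %s desc = %s\n", st.Code(), st.Message())
}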
Jan 13 20:31:29.718585 kubelet[2613]: E0113 20:31:29.718483 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.718585 kubelet[2613]: E0113 20:31:29.718539 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:29.718585 kubelet[2613]: E0113 20:31:29.718580 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:29.718692 kubelet[2613]: E0113 20:31:29.718641 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" Jan 13 20:31:29.741396 containerd[1444]: time="2025-01-13T20:31:29.741236519Z" level=error msg="Failed to destroy network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.741643 containerd[1444]: time="2025-01-13T20:31:29.741608412Z" level=error msg="encountered an error cleaning up failed sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.741715 containerd[1444]: time="2025-01-13T20:31:29.741668901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.741923 kubelet[2613]: E0113 20:31:29.741891 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.741983 kubelet[2613]: E0113 20:31:29.741943 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:29.741983 kubelet[2613]: E0113 20:31:29.741967 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:29.742037 kubelet[2613]: E0113 20:31:29.742023 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" Jan 13 20:31:29.743815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186-shm.mount: Deactivated successfully. 
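Each failed attempt yields the same four kubelet lines (remote_runtime.go:193, kuberuntime_sandbox.go:72, kuberuntime_manager.go:1172, pod_workers.go:1298): one underlying CNI error re-logged at each layer of the sandbox-creation call chain until pod_workers gives up with "Error syncing pod, skipping". A compressed sketch of that layering using Go error wrapping (the layer messages are paraphrased from the log, not kubelet's exact code):

package main

import (
	"errors"
	"fmt"
)

func main() {
	// The single root cause seen throughout this section.
	root := errors.New("stat /var/lib/calico/nodename: no such file or directory")

	// Each layer wraps the same error on the way up the call chain.
	runtimeErr := fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox: %w", root)
	sandboxErr := fmt.Errorf("Failed to create sandbox for pod: %w", runtimeErr)
	syncErr := fmt.Errorf(`failed to "CreatePodSandbox" for "calico-apiserver-5d7f47bd54-dc6xn": %w`, sandboxErr)

	for _, e := range []error{runtimeErr, sandboxErr, syncErr} {
		fmt.Println(e)
	}
	// The root cause stays reachable through every wrapper.
	fmt.Println("root cause reachable:", errors.Is(syncErr, root))
}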
Jan 13 20:31:29.748052 containerd[1444]: time="2025-01-13T20:31:29.747920765Z" level=error msg="Failed to destroy network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.748418 containerd[1444]: time="2025-01-13T20:31:29.748372270Z" level=error msg="encountered an error cleaning up failed sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.748535 containerd[1444]: time="2025-01-13T20:31:29.748513211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.750072 kubelet[2613]: E0113 20:31:29.750045 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.750151 kubelet[2613]: E0113 20:31:29.750103 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:29.750151 kubelet[2613]: E0113 20:31:29.750123 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:29.750213 kubelet[2613]: E0113 20:31:29.750168 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" Jan 13 20:31:29.750449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4-shm.mount: Deactivated successfully. Jan 13 20:31:29.756110 containerd[1444]: time="2025-01-13T20:31:29.755464416Z" level=error msg="Failed to destroy network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.756110 containerd[1444]: time="2025-01-13T20:31:29.755723333Z" level=error msg="Failed to destroy network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.756332 containerd[1444]: time="2025-01-13T20:31:29.756296016Z" level=error msg="encountered an error cleaning up failed sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.756397 containerd[1444]: time="2025-01-13T20:31:29.756361505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.756589 kubelet[2613]: E0113 20:31:29.756563 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.756661 kubelet[2613]: E0113 20:31:29.756620 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:29.756661 kubelet[2613]: E0113 20:31:29.756639 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:29.756719 kubelet[2613]: E0113 20:31:29.756684 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hlvx2" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" Jan 13 20:31:29.757720 containerd[1444]: time="2025-01-13T20:31:29.757650372Z" level=error msg="encountered an error cleaning up failed sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.758359 containerd[1444]: time="2025-01-13T20:31:29.758294745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.758911 kubelet[2613]: E0113 20:31:29.758875 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:29.758988 kubelet[2613]: E0113 20:31:29.758921 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:29.758988 kubelet[2613]: E0113 20:31:29.758941 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:29.759407 kubelet[2613]: E0113 20:31:29.758988 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v72tl" podUID="998cfadf-febb-495f-927c-5b5b4a548933" Jan 13 20:31:30.411106 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:43592.service - OpenSSH per-connection server daemon (10.0.0.1:43592). Jan 13 20:31:30.492776 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 43592 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:30.495312 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:30.502324 systemd-logind[1423]: New session 8 of user core. Jan 13 20:31:30.507550 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 13 20:31:30.614154 kubelet[2613]: I0113 20:31:30.614116 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.614821799Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\"" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615000383Z" level=info msg="Ensure that sandbox d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4 in task-service has been cleanup successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615291983Z" level=info msg="TearDown network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615306985Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" returns successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615522894Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\"" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615586863Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615595624Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615828976Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615908746Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully" Jan 13 20:31:30.616631 containerd[1444]: time="2025-01-13T20:31:30.615918988Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully" Jan 13 20:31:30.617201 containerd[1444]: time="2025-01-13T20:31:30.616864876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:3,}" Jan 13 
20:31:30.619167 kubelet[2613]: I0113 20:31:30.618595 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4" Jan 13 20:31:30.620782 kubelet[2613]: I0113 20:31:30.620476 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89" Jan 13 20:31:30.621879 containerd[1444]: time="2025-01-13T20:31:30.621789544Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:31:30.622149 containerd[1444]: time="2025-01-13T20:31:30.622057620Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\"" Jan 13 20:31:30.622399 containerd[1444]: time="2025-01-13T20:31:30.622354140Z" level=info msg="Ensure that sandbox 27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4 in task-service has been cleanup successfully" Jan 13 20:31:30.623365 containerd[1444]: time="2025-01-13T20:31:30.622599053Z" level=info msg="Ensure that sandbox 0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89 in task-service has been cleanup successfully" Jan 13 20:31:30.623365 containerd[1444]: time="2025-01-13T20:31:30.623068717Z" level=info msg="TearDown network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" successfully" Jan 13 20:31:30.623365 containerd[1444]: time="2025-01-13T20:31:30.623193054Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" returns successfully" Jan 13 20:31:30.623365 containerd[1444]: time="2025-01-13T20:31:30.623073158Z" level=info msg="TearDown network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" successfully" Jan 13 20:31:30.623534 containerd[1444]: time="2025-01-13T20:31:30.623516938Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" returns successfully" Jan 13 20:31:30.623561 kubelet[2613]: I0113 20:31:30.623437 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0" Jan 13 20:31:30.623925 containerd[1444]: time="2025-01-13T20:31:30.623716045Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:31:30.623925 containerd[1444]: time="2025-01-13T20:31:30.623802976Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:31:30.623925 containerd[1444]: time="2025-01-13T20:31:30.623813698Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:31:30.624048 containerd[1444]: time="2025-01-13T20:31:30.623937075Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\"" Jan 13 20:31:30.624048 containerd[1444]: time="2025-01-13T20:31:30.623960958Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\"" Jan 13 20:31:30.624096 containerd[1444]: time="2025-01-13T20:31:30.624055811Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully" Jan 13 20:31:30.624096 containerd[1444]: time="2025-01-13T20:31:30.624070253Z" level=info 
msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully" Jan 13 20:31:30.624161 containerd[1444]: time="2025-01-13T20:31:30.624127901Z" level=info msg="Ensure that sandbox 3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0 in task-service has been cleanup successfully" Jan 13 20:31:30.624423 containerd[1444]: time="2025-01-13T20:31:30.624402898Z" level=info msg="TearDown network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" successfully" Jan 13 20:31:30.624423 containerd[1444]: time="2025-01-13T20:31:30.624421220Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" returns successfully" Jan 13 20:31:30.625247 containerd[1444]: time="2025-01-13T20:31:30.624764467Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" Jan 13 20:31:30.625247 containerd[1444]: time="2025-01-13T20:31:30.625092151Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully" Jan 13 20:31:30.625247 containerd[1444]: time="2025-01-13T20:31:30.625103233Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully" Jan 13 20:31:30.625416 containerd[1444]: time="2025-01-13T20:31:30.625357707Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:31:30.626070 containerd[1444]: time="2025-01-13T20:31:30.625483404Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:31:30.626070 containerd[1444]: time="2025-01-13T20:31:30.625500207Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:31:30.626169 kubelet[2613]: E0113 20:31:30.626062 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:30.627009 containerd[1444]: time="2025-01-13T20:31:30.626829227Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\"" Jan 13 20:31:30.627009 containerd[1444]: time="2025-01-13T20:31:30.626900396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:3,}" Jan 13 20:31:30.627009 containerd[1444]: time="2025-01-13T20:31:30.626921999Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully" Jan 13 20:31:30.627009 containerd[1444]: time="2025-01-13T20:31:30.626932681Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully" Jan 13 20:31:30.627009 containerd[1444]: time="2025-01-13T20:31:30.626973366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:3,}" Jan 13 20:31:30.629272 containerd[1444]: time="2025-01-13T20:31:30.629248235Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" Jan 13 20:31:30.629376 containerd[1444]: time="2025-01-13T20:31:30.629332726Z" level=info 
msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully" Jan 13 20:31:30.629376 containerd[1444]: time="2025-01-13T20:31:30.629344168Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully" Jan 13 20:31:30.629604 kubelet[2613]: E0113 20:31:30.629555 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:30.629806 kubelet[2613]: I0113 20:31:30.629786 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30" Jan 13 20:31:30.630864 containerd[1444]: time="2025-01-13T20:31:30.630642944Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:31:30.630864 containerd[1444]: time="2025-01-13T20:31:30.630801845Z" level=info msg="Ensure that sandbox afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30 in task-service has been cleanup successfully" Jan 13 20:31:30.631855 containerd[1444]: time="2025-01-13T20:31:30.631753174Z" level=info msg="TearDown network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" successfully" Jan 13 20:31:30.631855 containerd[1444]: time="2025-01-13T20:31:30.631780658Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" returns successfully" Jan 13 20:31:30.632334 containerd[1444]: time="2025-01-13T20:31:30.632037853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:3,}" Jan 13 20:31:30.632334 containerd[1444]: time="2025-01-13T20:31:30.632159629Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:31:30.632334 containerd[1444]: time="2025-01-13T20:31:30.632239720Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:31:30.632334 containerd[1444]: time="2025-01-13T20:31:30.632248601Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:31:30.633788 containerd[1444]: time="2025-01-13T20:31:30.633744204Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:31:30.633839 containerd[1444]: time="2025-01-13T20:31:30.633829456Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:31:30.634035 containerd[1444]: time="2025-01-13T20:31:30.634009240Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:31:30.634522 containerd[1444]: time="2025-01-13T20:31:30.634492345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:31:30.635582 kubelet[2613]: I0113 20:31:30.635371 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186" Jan 13 20:31:30.635860 containerd[1444]: 
time="2025-01-13T20:31:30.635832287Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\"" Jan 13 20:31:30.636020 containerd[1444]: time="2025-01-13T20:31:30.635972986Z" level=info msg="Ensure that sandbox f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186 in task-service has been cleanup successfully" Jan 13 20:31:30.636207 containerd[1444]: time="2025-01-13T20:31:30.636144689Z" level=info msg="TearDown network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" successfully" Jan 13 20:31:30.636207 containerd[1444]: time="2025-01-13T20:31:30.636161652Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" returns successfully" Jan 13 20:31:30.637016 containerd[1444]: time="2025-01-13T20:31:30.636978882Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\"" Jan 13 20:31:30.638829 containerd[1444]: time="2025-01-13T20:31:30.638677553Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully" Jan 13 20:31:30.638829 containerd[1444]: time="2025-01-13T20:31:30.638705236Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully" Jan 13 20:31:30.641443 systemd[1]: run-netns-cni\x2da15cce9a\x2d2ec0\x2d7e62\x2d7adc\x2dd86b6739edf1.mount: Deactivated successfully. Jan 13 20:31:30.641531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89-shm.mount: Deactivated successfully. Jan 13 20:31:30.641588 systemd[1]: run-netns-cni\x2ddb0276cb\x2de565\x2d1c90\x2ddc65\x2d84109729100c.mount: Deactivated successfully. Jan 13 20:31:30.641633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0-shm.mount: Deactivated successfully. Jan 13 20:31:30.641685 systemd[1]: run-netns-cni\x2df8846c86\x2ddf9e\x2dc00f\x2d15f0\x2d49d6c676d81a.mount: Deactivated successfully. Jan 13 20:31:30.641733 systemd[1]: run-netns-cni\x2d584f26a7\x2da6fe\x2d43b6\x2ddbdc\x2d118372941fde.mount: Deactivated successfully. Jan 13 20:31:30.641774 systemd[1]: run-netns-cni\x2d62f29d73\x2df01c\x2d5041\x2dc1dd\x2df4da91473c51.mount: Deactivated successfully. Jan 13 20:31:30.641820 systemd[1]: run-netns-cni\x2db9072646\x2d0d9d\x2d92b5\x2d271a\x2dc22c44243b0e.mount: Deactivated successfully. 
Jan 13 20:31:30.642834 containerd[1444]: time="2025-01-13T20:31:30.642800952Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:31:30.642915 containerd[1444]: time="2025-01-13T20:31:30.642898405Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:31:30.642950 containerd[1444]: time="2025-01-13T20:31:30.642913607Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:31:30.643371 containerd[1444]: time="2025-01-13T20:31:30.643341585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:31:30.833902 sshd[4041]: Connection closed by 10.0.0.1 port 43592 Jan 13 20:31:30.834257 sshd-session[4039]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:30.839119 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:43592.service: Deactivated successfully. Jan 13 20:31:30.841247 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 20:31:30.842372 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit. Jan 13 20:31:30.843689 systemd-logind[1423]: Removed session 8. Jan 13 20:31:31.072702 containerd[1444]: time="2025-01-13T20:31:31.072544759Z" level=error msg="Failed to destroy network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.074228 containerd[1444]: time="2025-01-13T20:31:31.073120312Z" level=error msg="encountered an error cleaning up failed sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.074228 containerd[1444]: time="2025-01-13T20:31:31.073180639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.074686 kubelet[2613]: E0113 20:31:31.074607 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.074686 kubelet[2613]: E0113 20:31:31.074669 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:31.074686 kubelet[2613]: E0113 20:31:31.074690 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:31.074884 kubelet[2613]: E0113 20:31:31.074746 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:31.097545 containerd[1444]: time="2025-01-13T20:31:31.097418879Z" level=error msg="Failed to destroy network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.098142 containerd[1444]: time="2025-01-13T20:31:31.098110527Z" level=error msg="encountered an error cleaning up failed sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.098203 containerd[1444]: time="2025-01-13T20:31:31.098185417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.098446 kubelet[2613]: E0113 20:31:31.098422 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.098515 kubelet[2613]: E0113 20:31:31.098475 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:31.098515 kubelet[2613]: E0113 20:31:31.098494 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:31.098565 kubelet[2613]: E0113 20:31:31.098550 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" Jan 13 20:31:31.126612 containerd[1444]: time="2025-01-13T20:31:31.126465330Z" level=error msg="Failed to destroy network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.127113 containerd[1444]: time="2025-01-13T20:31:31.126940431Z" level=error msg="encountered an error cleaning up failed sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.127113 containerd[1444]: time="2025-01-13T20:31:31.127009680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.127301 kubelet[2613]: E0113 20:31:31.127271 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.127516 kubelet[2613]: E0113 20:31:31.127494 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:31.127580 kubelet[2613]: E0113 20:31:31.127526 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:31.127610 kubelet[2613]: E0113 20:31:31.127590 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hlvx2" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" Jan 13 20:31:31.142549 containerd[1444]: time="2025-01-13T20:31:31.142485366Z" level=error msg="Failed to destroy network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.142971 containerd[1444]: time="2025-01-13T20:31:31.142911020Z" level=error msg="encountered an error cleaning up failed sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.143221 containerd[1444]: time="2025-01-13T20:31:31.143193016Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.143566 kubelet[2613]: E0113 20:31:31.143538 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.143643 kubelet[2613]: E0113 20:31:31.143593 2613 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:31.143643 kubelet[2613]: E0113 20:31:31.143630 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:31.143728 kubelet[2613]: E0113 20:31:31.143710 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v72tl" podUID="998cfadf-febb-495f-927c-5b5b4a548933" Jan 13 20:31:31.147143 containerd[1444]: time="2025-01-13T20:31:31.147100073Z" level=error msg="Failed to destroy network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.148503 containerd[1444]: time="2025-01-13T20:31:31.148462166Z" level=error msg="encountered an error cleaning up failed sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.148571 containerd[1444]: time="2025-01-13T20:31:31.148528414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.148745 kubelet[2613]: E0113 20:31:31.148715 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:31:31.148803 kubelet[2613]: E0113 20:31:31.148764 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:31.148803 kubelet[2613]: E0113 20:31:31.148784 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:31.148861 kubelet[2613]: E0113 20:31:31.148836 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" Jan 13 20:31:31.152453 containerd[1444]: time="2025-01-13T20:31:31.152423349Z" level=error msg="Failed to destroy network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.153109 containerd[1444]: time="2025-01-13T20:31:31.152964658Z" level=error msg="encountered an error cleaning up failed sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.153109 containerd[1444]: time="2025-01-13T20:31:31.153018265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.153239 kubelet[2613]: E0113 20:31:31.153214 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.153280 kubelet[2613]: E0113 20:31:31.153253 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:31.153280 kubelet[2613]: E0113 20:31:31.153271 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:31.153335 kubelet[2613]: E0113 20:31:31.153328 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" Jan 13 20:31:31.496713 containerd[1444]: time="2025-01-13T20:31:31.496592884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:31.497594 containerd[1444]: time="2025-01-13T20:31:31.497503800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:31:31.498144 containerd[1444]: time="2025-01-13T20:31:31.498102316Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:31.499907 containerd[1444]: time="2025-01-13T20:31:31.499880102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:31.501227 containerd[1444]: time="2025-01-13T20:31:31.501201110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.934440549s" Jan 13 20:31:31.501279 containerd[1444]: time="2025-01-13T20:31:31.501234914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" 
returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:31:31.507565 containerd[1444]: time="2025-01-13T20:31:31.507522113Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:31:31.525077 containerd[1444]: time="2025-01-13T20:31:31.525030978Z" level=info msg="CreateContainer within sandbox \"60132f54cafd53910002c82a78e2483c8463da594266668a045c92323a0a829d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937\"" Jan 13 20:31:31.525735 containerd[1444]: time="2025-01-13T20:31:31.525708864Z" level=info msg="StartContainer for \"2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937\"" Jan 13 20:31:31.583592 systemd[1]: Started cri-containerd-2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937.scope - libcontainer container 2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937. Jan 13 20:31:31.612905 containerd[1444]: time="2025-01-13T20:31:31.612850898Z" level=info msg="StartContainer for \"2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937\" returns successfully" Jan 13 20:31:31.639363 kubelet[2613]: I0113 20:31:31.639329 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef" Jan 13 20:31:31.640967 containerd[1444]: time="2025-01-13T20:31:31.640927346Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\"" Jan 13 20:31:31.641221 containerd[1444]: time="2025-01-13T20:31:31.641093127Z" level=info msg="Ensure that sandbox 43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef in task-service has been cleanup successfully" Jan 13 20:31:31.641344 containerd[1444]: time="2025-01-13T20:31:31.641324196Z" level=info msg="TearDown network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" successfully" Jan 13 20:31:31.641404 containerd[1444]: time="2025-01-13T20:31:31.641343118Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" returns successfully" Jan 13 20:31:31.641866 containerd[1444]: time="2025-01-13T20:31:31.641841382Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\"" Jan 13 20:31:31.642231 containerd[1444]: time="2025-01-13T20:31:31.641914031Z" level=info msg="TearDown network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" successfully" Jan 13 20:31:31.642231 containerd[1444]: time="2025-01-13T20:31:31.641923952Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" returns successfully" Jan 13 20:31:31.642292 containerd[1444]: time="2025-01-13T20:31:31.642271996Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\"" Jan 13 20:31:31.642360 containerd[1444]: time="2025-01-13T20:31:31.642330404Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully" Jan 13 20:31:31.642360 containerd[1444]: time="2025-01-13T20:31:31.642346366Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully" Jan 13 
20:31:31.642927 containerd[1444]: time="2025-01-13T20:31:31.642900556Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" Jan 13 20:31:31.642996 containerd[1444]: time="2025-01-13T20:31:31.642978246Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully" Jan 13 20:31:31.642996 containerd[1444]: time="2025-01-13T20:31:31.642992568Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully" Jan 13 20:31:31.643326 kubelet[2613]: E0113 20:31:31.643195 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:31.643326 kubelet[2613]: I0113 20:31:31.643207 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445" Jan 13 20:31:31.643701 containerd[1444]: time="2025-01-13T20:31:31.643672294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:4,}" Jan 13 20:31:31.644730 containerd[1444]: time="2025-01-13T20:31:31.644693304Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" Jan 13 20:31:31.644862 containerd[1444]: time="2025-01-13T20:31:31.644838403Z" level=info msg="Ensure that sandbox 0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445 in task-service has been cleanup successfully" Jan 13 20:31:31.645142 containerd[1444]: time="2025-01-13T20:31:31.645014145Z" level=info msg="TearDown network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" successfully" Jan 13 20:31:31.645142 containerd[1444]: time="2025-01-13T20:31:31.645034307Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" returns successfully" Jan 13 20:31:31.646320 containerd[1444]: time="2025-01-13T20:31:31.646264904Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:31:31.646467 containerd[1444]: time="2025-01-13T20:31:31.646365557Z" level=info msg="TearDown network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" successfully" Jan 13 20:31:31.646467 containerd[1444]: time="2025-01-13T20:31:31.646377358Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" returns successfully" Jan 13 20:31:31.646728 containerd[1444]: time="2025-01-13T20:31:31.646705200Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:31:31.646899 containerd[1444]: time="2025-01-13T20:31:31.646875661Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:31:31.646899 containerd[1444]: time="2025-01-13T20:31:31.646892944Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:31:31.647903 containerd[1444]: time="2025-01-13T20:31:31.647823622Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:31:31.647983 containerd[1444]: 
time="2025-01-13T20:31:31.647912793Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:31:31.647983 containerd[1444]: time="2025-01-13T20:31:31.647923715Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:31:31.648828 containerd[1444]: time="2025-01-13T20:31:31.648404256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:31:31.648862 kubelet[2613]: I0113 20:31:31.648199 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7" Jan 13 20:31:31.650937 systemd[1]: run-netns-cni\x2d063b2aa7\x2dc282\x2d492b\x2d543b\x2d6d670ea3c773.mount: Deactivated successfully. Jan 13 20:31:31.652749 containerd[1444]: time="2025-01-13T20:31:31.651529853Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\"" Jan 13 20:31:31.652749 containerd[1444]: time="2025-01-13T20:31:31.651677152Z" level=info msg="Ensure that sandbox a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7 in task-service has been cleanup successfully" Jan 13 20:31:31.651022 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef-shm.mount: Deactivated successfully. Jan 13 20:31:31.651199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3-shm.mount: Deactivated successfully. Jan 13 20:31:31.652884 containerd[1444]: time="2025-01-13T20:31:31.652825537Z" level=info msg="TearDown network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" successfully" Jan 13 20:31:31.652884 containerd[1444]: time="2025-01-13T20:31:31.652845580Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" returns successfully" Jan 13 20:31:31.651272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9-shm.mount: Deactivated successfully. Jan 13 20:31:31.651326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033733855.mount: Deactivated successfully. 
Jan 13 20:31:31.653555 containerd[1444]: time="2025-01-13T20:31:31.653522826Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\"" Jan 13 20:31:31.653650 containerd[1444]: time="2025-01-13T20:31:31.653615638Z" level=info msg="TearDown network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" successfully" Jan 13 20:31:31.653650 containerd[1444]: time="2025-01-13T20:31:31.653629080Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" returns successfully" Jan 13 20:31:31.654266 containerd[1444]: time="2025-01-13T20:31:31.654232676Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\"" Jan 13 20:31:31.654343 containerd[1444]: time="2025-01-13T20:31:31.654323288Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully" Jan 13 20:31:31.654343 containerd[1444]: time="2025-01-13T20:31:31.654333249Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully" Jan 13 20:31:31.654770 containerd[1444]: time="2025-01-13T20:31:31.654713857Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:31:31.654828 containerd[1444]: time="2025-01-13T20:31:31.654815990Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:31:31.654861 containerd[1444]: time="2025-01-13T20:31:31.654826432Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:31:31.656607 containerd[1444]: time="2025-01-13T20:31:31.656569653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:31:31.659456 kubelet[2613]: E0113 20:31:31.658779 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:31.661409 kubelet[2613]: I0113 20:31:31.660942 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9" Jan 13 20:31:31.661489 containerd[1444]: time="2025-01-13T20:31:31.661463715Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\"" Jan 13 20:31:31.661806 containerd[1444]: time="2025-01-13T20:31:31.661619255Z" level=info msg="Ensure that sandbox 155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9 in task-service has been cleanup successfully" Jan 13 20:31:31.663582 kubelet[2613]: I0113 20:31:31.663287 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3" Jan 13 20:31:31.664266 containerd[1444]: time="2025-01-13T20:31:31.663865540Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" Jan 13 20:31:31.664266 containerd[1444]: time="2025-01-13T20:31:31.664017240Z" level=info msg="Ensure that sandbox 575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3 in task-service has been cleanup successfully" Jan 13 20:31:31.664266 
containerd[1444]: time="2025-01-13T20:31:31.664212104Z" level=info msg="TearDown network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" successfully" Jan 13 20:31:31.664266 containerd[1444]: time="2025-01-13T20:31:31.664227466Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" returns successfully" Jan 13 20:31:31.664436 containerd[1444]: time="2025-01-13T20:31:31.664363684Z" level=info msg="TearDown network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" successfully" Jan 13 20:31:31.664436 containerd[1444]: time="2025-01-13T20:31:31.664375845Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" returns successfully" Jan 13 20:31:31.664430 systemd[1]: run-netns-cni\x2d3cd02d48\x2db415\x2dd44d\x2db4fd\x2d5fa769de32ff.mount: Deactivated successfully. Jan 13 20:31:31.664509 systemd[1]: run-netns-cni\x2d25aafae3\x2d7a2a\x2de04c\x2d1fdb\x2dca41a6481d47.mount: Deactivated successfully. Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665677491Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\"" Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665707534Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665758701Z" level=info msg="TearDown network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" successfully" Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665768622Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" returns successfully" Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665785824Z" level=info msg="TearDown network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" successfully" Jan 13 20:31:31.666023 containerd[1444]: time="2025-01-13T20:31:31.665795546Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" returns successfully" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666049578Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\"" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666141750Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666151831Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666234161Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666305530Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666314132Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:31:31.666436 containerd[1444]: time="2025-01-13T20:31:31.666407583Z" level=info 
msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" Jan 13 20:31:31.666882 containerd[1444]: time="2025-01-13T20:31:31.666463591Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully" Jan 13 20:31:31.666882 containerd[1444]: time="2025-01-13T20:31:31.666473632Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully" Jan 13 20:31:31.668044 systemd[1]: run-netns-cni\x2dd0db96f6\x2ded93\x2d17e0\x2d7952\x2dfb6c6df7fd69.mount: Deactivated successfully. Jan 13 20:31:31.668319 systemd[1]: run-netns-cni\x2d8f346808\x2df20a\x2d0cc1\x2dd39f\x2d25af8ebd7d95.mount: Deactivated successfully. Jan 13 20:31:31.669518 containerd[1444]: time="2025-01-13T20:31:31.669328755Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:31:31.669518 containerd[1444]: time="2025-01-13T20:31:31.669419126Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:31:31.669518 containerd[1444]: time="2025-01-13T20:31:31.669430048Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:31:31.669602 containerd[1444]: time="2025-01-13T20:31:31.669534941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:4,}" Jan 13 20:31:31.671967 containerd[1444]: time="2025-01-13T20:31:31.671850715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:4,}" Jan 13 20:31:31.681727 kubelet[2613]: I0113 20:31:31.681412 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e" Jan 13 20:31:31.682341 containerd[1444]: time="2025-01-13T20:31:31.682015207Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\"" Jan 13 20:31:31.682341 containerd[1444]: time="2025-01-13T20:31:31.682196630Z" level=info msg="Ensure that sandbox fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e in task-service has been cleanup successfully" Jan 13 20:31:31.684339 systemd[1]: run-netns-cni\x2dd95172b3\x2db414\x2d7e24\x2dcba2\x2da321a23c4a9a.mount: Deactivated successfully. 
Jan 13 20:31:31.686457 containerd[1444]: time="2025-01-13T20:31:31.685197691Z" level=info msg="TearDown network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" successfully" Jan 13 20:31:31.686457 containerd[1444]: time="2025-01-13T20:31:31.685229975Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" returns successfully" Jan 13 20:31:31.686858 containerd[1444]: time="2025-01-13T20:31:31.686829299Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\"" Jan 13 20:31:31.687023 containerd[1444]: time="2025-01-13T20:31:31.687007001Z" level=info msg="TearDown network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" successfully" Jan 13 20:31:31.687107 containerd[1444]: time="2025-01-13T20:31:31.687092172Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" returns successfully" Jan 13 20:31:31.692484 containerd[1444]: time="2025-01-13T20:31:31.692446892Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\"" Jan 13 20:31:31.692558 containerd[1444]: time="2025-01-13T20:31:31.692542344Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully" Jan 13 20:31:31.692558 containerd[1444]: time="2025-01-13T20:31:31.692551946Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully" Jan 13 20:31:31.693159 containerd[1444]: time="2025-01-13T20:31:31.693137860Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" Jan 13 20:31:31.693335 containerd[1444]: time="2025-01-13T20:31:31.693319563Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully" Jan 13 20:31:31.693425 containerd[1444]: time="2025-01-13T20:31:31.693411215Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully" Jan 13 20:31:31.693742 kubelet[2613]: E0113 20:31:31.693717 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:31.695252 containerd[1444]: time="2025-01-13T20:31:31.695054664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:4,}" Jan 13 20:31:31.764482 containerd[1444]: time="2025-01-13T20:31:31.763313218Z" level=error msg="Failed to destroy network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.768071 containerd[1444]: time="2025-01-13T20:31:31.768027017Z" level=error msg="encountered an error cleaning up failed sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.768556 containerd[1444]: 
time="2025-01-13T20:31:31.768443870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.768718 kubelet[2613]: E0113 20:31:31.768694 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.768835 kubelet[2613]: E0113 20:31:31.768746 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:31.768835 kubelet[2613]: E0113 20:31:31.768772 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hlvx2" Jan 13 20:31:31.768920 kubelet[2613]: E0113 20:31:31.768832 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hlvx2_kube-system(e9fd979c-1ebe-4de8-a229-23c188a43678)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hlvx2" podUID="e9fd979c-1ebe-4de8-a229-23c188a43678" Jan 13 20:31:31.772866 containerd[1444]: time="2025-01-13T20:31:31.772832307Z" level=error msg="Failed to destroy network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.773612 containerd[1444]: time="2025-01-13T20:31:31.773468188Z" level=error msg="encountered an error cleaning up failed sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 13 20:31:31.773612 containerd[1444]: time="2025-01-13T20:31:31.773525035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.773740 kubelet[2613]: E0113 20:31:31.773688 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.773740 kubelet[2613]: E0113 20:31:31.773732 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:31.773802 kubelet[2613]: E0113 20:31:31.773750 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" Jan 13 20:31:31.773802 kubelet[2613]: E0113 20:31:31.773798 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-kjr8q_calico-apiserver(1cfacfd0-9476-421d-9ba4-8948bbbe88e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podUID="1cfacfd0-9476-421d-9ba4-8948bbbe88e8" Jan 13 20:31:31.860435 containerd[1444]: time="2025-01-13T20:31:31.859918454Z" level=error msg="Failed to destroy network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.860435 containerd[1444]: time="2025-01-13T20:31:31.860357029Z" level=error msg="encountered an error cleaning up failed sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.860668 containerd[1444]: time="2025-01-13T20:31:31.860443080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.860882 kubelet[2613]: E0113 20:31:31.860676 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.860882 kubelet[2613]: E0113 20:31:31.860865 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:31.860975 kubelet[2613]: E0113 20:31:31.860900 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" Jan 13 20:31:31.861016 kubelet[2613]: E0113 20:31:31.860972 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85d666cdf5-4jgmf_calico-system(b42b7cb4-9adc-45b0-a43d-a62a51c30a4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podUID="b42b7cb4-9adc-45b0-a43d-a62a51c30a4e" Jan 13 20:31:31.862299 containerd[1444]: time="2025-01-13T20:31:31.862263392Z" level=error msg="Failed to destroy network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.862622 containerd[1444]: time="2025-01-13T20:31:31.862588833Z" level=error 
msg="encountered an error cleaning up failed sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.862679 containerd[1444]: time="2025-01-13T20:31:31.862645080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.863443 kubelet[2613]: E0113 20:31:31.863146 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.863443 kubelet[2613]: E0113 20:31:31.863204 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:31.863443 kubelet[2613]: E0113 20:31:31.863225 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xmsjq" Jan 13 20:31:31.863585 kubelet[2613]: E0113 20:31:31.863330 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xmsjq_calico-system(d62b149c-90ef-4582-bf5b-b3dad659f453)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xmsjq" podUID="d62b149c-90ef-4582-bf5b-b3dad659f453" Jan 13 20:31:31.887353 containerd[1444]: time="2025-01-13T20:31:31.887304534Z" level=error msg="Failed to destroy network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.887634 containerd[1444]: 
time="2025-01-13T20:31:31.887600731Z" level=error msg="Failed to destroy network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.887714 containerd[1444]: time="2025-01-13T20:31:31.887683262Z" level=error msg="encountered an error cleaning up failed sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.887767 containerd[1444]: time="2025-01-13T20:31:31.887748830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.888903 kubelet[2613]: E0113 20:31:31.888548 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.888903 kubelet[2613]: E0113 20:31:31.888603 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:31.888903 kubelet[2613]: E0113 20:31:31.888623 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" Jan 13 20:31:31.889070 containerd[1444]: time="2025-01-13T20:31:31.888679629Z" level=error msg="encountered an error cleaning up failed sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.889070 containerd[1444]: time="2025-01-13T20:31:31.888749357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.889141 kubelet[2613]: E0113 20:31:31.888684 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7f47bd54-dc6xn_calico-apiserver(ef49bc1a-4bb1-4428-954b-8600a024bc5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podUID="ef49bc1a-4bb1-4428-954b-8600a024bc5a" Jan 13 20:31:31.889141 kubelet[2613]: E0113 20:31:31.888934 2613 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:31:31.889141 kubelet[2613]: E0113 20:31:31.888974 2613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:31.889263 kubelet[2613]: E0113 20:31:31.888991 2613 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v72tl" Jan 13 20:31:31.890522 kubelet[2613]: E0113 20:31:31.890486 2613 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v72tl_kube-system(998cfadf-febb-495f-927c-5b5b4a548933)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v72tl" podUID="998cfadf-febb-495f-927c-5b5b4a548933" Jan 13 20:31:31.900255 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:31:31.900376 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:31:32.644475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c-shm.mount: Deactivated successfully. Jan 13 20:31:32.685772 kubelet[2613]: I0113 20:31:32.685724 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e" Jan 13 20:31:32.686614 containerd[1444]: time="2025-01-13T20:31:32.686303947Z" level=info msg="StopPodSandbox for \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\"" Jan 13 20:31:32.686614 containerd[1444]: time="2025-01-13T20:31:32.686501531Z" level=info msg="Ensure that sandbox 365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e in task-service has been cleanup successfully" Jan 13 20:31:32.687099 containerd[1444]: time="2025-01-13T20:31:32.686945384Z" level=info msg="TearDown network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" successfully" Jan 13 20:31:32.687099 containerd[1444]: time="2025-01-13T20:31:32.686977628Z" level=info msg="StopPodSandbox for \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" returns successfully" Jan 13 20:31:32.688474 systemd[1]: run-netns-cni\x2d3cf9d9b6\x2df33f\x2d8887\x2d65d6\x2d29a86dec7b2f.mount: Deactivated successfully. Jan 13 20:31:32.689217 containerd[1444]: time="2025-01-13T20:31:32.689190611Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" Jan 13 20:31:32.689398 containerd[1444]: time="2025-01-13T20:31:32.689288063Z" level=info msg="TearDown network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" successfully" Jan 13 20:31:32.689398 containerd[1444]: time="2025-01-13T20:31:32.689303905Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" returns successfully" Jan 13 20:31:32.689642 containerd[1444]: time="2025-01-13T20:31:32.689613181Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:31:32.689711 containerd[1444]: time="2025-01-13T20:31:32.689698912Z" level=info msg="TearDown network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" successfully" Jan 13 20:31:32.689711 containerd[1444]: time="2025-01-13T20:31:32.689709673Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" returns successfully" Jan 13 20:31:32.690298 containerd[1444]: time="2025-01-13T20:31:32.690258218Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:31:32.690799 containerd[1444]: time="2025-01-13T20:31:32.690755918Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:31:32.690799 containerd[1444]: time="2025-01-13T20:31:32.690805484Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:31:32.691530 containerd[1444]: time="2025-01-13T20:31:32.691396794Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:31:32.691530 containerd[1444]: time="2025-01-13T20:31:32.691480604Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 
20:31:32.691530 containerd[1444]: time="2025-01-13T20:31:32.691491285Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:31:32.691638 kubelet[2613]: I0113 20:31:32.691405 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5" Jan 13 20:31:32.692221 containerd[1444]: time="2025-01-13T20:31:32.692101358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:5,}" Jan 13 20:31:32.692589 containerd[1444]: time="2025-01-13T20:31:32.692566333Z" level=info msg="StopPodSandbox for \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\"" Jan 13 20:31:32.692783 containerd[1444]: time="2025-01-13T20:31:32.692763277Z" level=info msg="Ensure that sandbox c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5 in task-service has been cleanup successfully" Jan 13 20:31:32.693154 containerd[1444]: time="2025-01-13T20:31:32.693096036Z" level=info msg="TearDown network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" successfully" Jan 13 20:31:32.693154 containerd[1444]: time="2025-01-13T20:31:32.693124160Z" level=info msg="StopPodSandbox for \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" returns successfully" Jan 13 20:31:32.694193 containerd[1444]: time="2025-01-13T20:31:32.693487043Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\"" Jan 13 20:31:32.694746 containerd[1444]: time="2025-01-13T20:31:32.694526847Z" level=info msg="TearDown network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" successfully" Jan 13 20:31:32.694746 containerd[1444]: time="2025-01-13T20:31:32.694548729Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" returns successfully" Jan 13 20:31:32.696115 containerd[1444]: time="2025-01-13T20:31:32.695591414Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\"" Jan 13 20:31:32.696115 containerd[1444]: time="2025-01-13T20:31:32.695694106Z" level=info msg="TearDown network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" successfully" Jan 13 20:31:32.696115 containerd[1444]: time="2025-01-13T20:31:32.695704147Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" returns successfully" Jan 13 20:31:32.695089 systemd[1]: run-netns-cni\x2d011173fa\x2db845\x2d1c5d\x2daf4d\x2d090a34f182dd.mount: Deactivated successfully. 
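
[Annotation] The long runs of StopPodSandbox/TearDown pairs here follow from how the retries work: before each new RunPodSandbox attempt (Attempt:4 earlier, Attempt:5 in these records), the stack of sandboxes left behind by previous failed attempts is stopped and torn down first, so the same sandbox IDs are revisited on every cycle. A schematic of that loop with hypothetical names; this shows the pattern visible in the records, not kubelet's implementation:

    // Schematic of the stop/teardown/retry cycle visible in these records.
    // Names and types are hypothetical; this is not kubelet's code.
    package main

    import "fmt"

    // retryPodSandbox tears down the sandboxes left by earlier failed attempts,
    // then re-runs the pod sandbox with an incremented attempt count.
    func retryPodSandbox(pod string, staleSandboxes []string, attempt int,
    	runPod func(pod string, attempt int) error) error {
    	for _, id := range staleSandboxes {
    		fmt.Printf("StopPodSandbox %q / TearDown network\n", id)
    	}
    	return runPod(pod, attempt+1) // Attempt:4 -> Attempt:5, as in the log
    }

    func main() {
    	// Sandbox IDs taken from the records above.
    	stale := []string{
    		"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e",
    		"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89",
    	}
    	err := retryPodSandbox("coredns-76f75df574-v72tl", stale, 4,
    		func(pod string, attempt int) error {
    			// While calico/node is down, every attempt fails identically.
    			return fmt.Errorf("plugin type=%q failed (add): stat /var/lib/calico/nodename: no such file or directory", "calico")
    		})
    	fmt.Println("RunPodSandbox:", err)
    }
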
Jan 13 20:31:32.696703 containerd[1444]: time="2025-01-13T20:31:32.696548088Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\"" Jan 13 20:31:32.696703 containerd[1444]: time="2025-01-13T20:31:32.696645779Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully" Jan 13 20:31:32.696703 containerd[1444]: time="2025-01-13T20:31:32.696657261Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully" Jan 13 20:31:32.696929 kubelet[2613]: I0113 20:31:32.696773 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c" Jan 13 20:31:32.697200 containerd[1444]: time="2025-01-13T20:31:32.696967578Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\"" Jan 13 20:31:32.697200 containerd[1444]: time="2025-01-13T20:31:32.697053828Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully" Jan 13 20:31:32.697200 containerd[1444]: time="2025-01-13T20:31:32.697063989Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully" Jan 13 20:31:32.697331 kubelet[2613]: E0113 20:31:32.697306 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:32.697723 containerd[1444]: time="2025-01-13T20:31:32.697620495Z" level=info msg="StopPodSandbox for \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\"" Jan 13 20:31:32.698203 containerd[1444]: time="2025-01-13T20:31:32.697709546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:5,}" Jan 13 20:31:32.698991 containerd[1444]: time="2025-01-13T20:31:32.698841881Z" level=info msg="Ensure that sandbox d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c in task-service has been cleanup successfully" Jan 13 20:31:32.701370 containerd[1444]: time="2025-01-13T20:31:32.701332338Z" level=info msg="TearDown network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" successfully" Jan 13 20:31:32.701875 containerd[1444]: time="2025-01-13T20:31:32.701461113Z" level=info msg="StopPodSandbox for \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" returns successfully" Jan 13 20:31:32.701875 containerd[1444]: time="2025-01-13T20:31:32.701720704Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\"" Jan 13 20:31:32.701875 containerd[1444]: time="2025-01-13T20:31:32.701804754Z" level=info msg="TearDown network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" successfully" Jan 13 20:31:32.701875 containerd[1444]: time="2025-01-13T20:31:32.701814955Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" returns successfully" Jan 13 20:31:32.701793 systemd[1]: run-netns-cni\x2d83910f90\x2dad71\x2dba58\x2d0a64\x2d86c6d9c11d66.mount: Deactivated successfully. 
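
[Annotation] The recurring kubelet dns.go:153 warning is a separate, benign issue: a resolv.conf nameserver line is limited to three entries, so the kubelet drops the extras and logs the line it actually applied (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that cap, assuming only the limit of three; the function name is hypothetical, and the fourth server in the example input is invented since the log only shows the applied three:

    // Sketch of the three-nameserver cap behind the recurring dns.go:153
    // message; the function name is hypothetical, and the limit matches the
    // classic resolv.conf maximum the kubelet enforces.
    package main

    import "fmt"

    const maxNameservers = 3 // resolv.conf honors at most three nameserver entries

    func capNameservers(servers []string) (applied, omitted []string) {
    	if len(servers) <= maxNameservers {
    		return servers, nil
    	}
    	return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
    	applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
    	if len(omitted) > 0 {
    		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n", applied)
    	}
    }
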
Jan 13 20:31:32.702111 containerd[1444]: time="2025-01-13T20:31:32.702028781Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\"" Jan 13 20:31:32.702137 containerd[1444]: time="2025-01-13T20:31:32.702112511Z" level=info msg="TearDown network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" successfully" Jan 13 20:31:32.702137 containerd[1444]: time="2025-01-13T20:31:32.702123512Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" returns successfully" Jan 13 20:31:32.702455 containerd[1444]: time="2025-01-13T20:31:32.702395664Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\"" Jan 13 20:31:32.702455 containerd[1444]: time="2025-01-13T20:31:32.702480154Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully" Jan 13 20:31:32.702455 containerd[1444]: time="2025-01-13T20:31:32.702491636Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully" Jan 13 20:31:32.702770 kubelet[2613]: I0113 20:31:32.702551 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c" Jan 13 20:31:32.704014 containerd[1444]: time="2025-01-13T20:31:32.703636292Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\"" Jan 13 20:31:32.704014 containerd[1444]: time="2025-01-13T20:31:32.703782550Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully" Jan 13 20:31:32.704014 containerd[1444]: time="2025-01-13T20:31:32.703799752Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully" Jan 13 20:31:32.704689 kubelet[2613]: E0113 20:31:32.704123 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:32.704753 containerd[1444]: time="2025-01-13T20:31:32.704723462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:5,}" Jan 13 20:31:32.705088 containerd[1444]: time="2025-01-13T20:31:32.704980692Z" level=info msg="StopPodSandbox for \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\"" Jan 13 20:31:32.705250 containerd[1444]: time="2025-01-13T20:31:32.705150272Z" level=info msg="Ensure that sandbox 810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c in task-service has been cleanup successfully" Jan 13 20:31:32.705658 containerd[1444]: time="2025-01-13T20:31:32.705533398Z" level=info msg="TearDown network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" successfully" Jan 13 20:31:32.705658 containerd[1444]: time="2025-01-13T20:31:32.705557761Z" level=info msg="StopPodSandbox for \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" returns successfully" Jan 13 20:31:32.706289 containerd[1444]: time="2025-01-13T20:31:32.706086944Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" Jan 13 20:31:32.706289 containerd[1444]: 
time="2025-01-13T20:31:32.706176315Z" level=info msg="TearDown network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" successfully" Jan 13 20:31:32.706289 containerd[1444]: time="2025-01-13T20:31:32.706186276Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" returns successfully" Jan 13 20:31:32.706771 containerd[1444]: time="2025-01-13T20:31:32.706748823Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:31:32.707254 containerd[1444]: time="2025-01-13T20:31:32.706998133Z" level=info msg="TearDown network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" successfully" Jan 13 20:31:32.707254 containerd[1444]: time="2025-01-13T20:31:32.707039938Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" returns successfully" Jan 13 20:31:32.707365 containerd[1444]: time="2025-01-13T20:31:32.707343254Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:31:32.707462 containerd[1444]: time="2025-01-13T20:31:32.707441665Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:31:32.707462 containerd[1444]: time="2025-01-13T20:31:32.707458507Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:31:32.707784 systemd[1]: run-netns-cni\x2dc729264d\x2d3406\x2d255b\x2d7c3c\x2dd8d048541275.mount: Deactivated successfully. Jan 13 20:31:32.707898 containerd[1444]: time="2025-01-13T20:31:32.707817830Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:31:32.708124 containerd[1444]: time="2025-01-13T20:31:32.707946646Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:31:32.708124 containerd[1444]: time="2025-01-13T20:31:32.707964008Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:31:32.708999 containerd[1444]: time="2025-01-13T20:31:32.708750661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:31:32.709075 kubelet[2613]: I0113 20:31:32.708891 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f" Jan 13 20:31:32.709953 containerd[1444]: time="2025-01-13T20:31:32.709922041Z" level=info msg="StopPodSandbox for \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\"" Jan 13 20:31:32.710117 containerd[1444]: time="2025-01-13T20:31:32.710085100Z" level=info msg="Ensure that sandbox de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f in task-service has been cleanup successfully" Jan 13 20:31:32.710316 containerd[1444]: time="2025-01-13T20:31:32.710288685Z" level=info msg="TearDown network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" successfully" Jan 13 20:31:32.710316 containerd[1444]: time="2025-01-13T20:31:32.710309927Z" level=info msg="StopPodSandbox for 
\"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" returns successfully" Jan 13 20:31:32.711164 containerd[1444]: time="2025-01-13T20:31:32.711139266Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\"" Jan 13 20:31:32.711566 containerd[1444]: time="2025-01-13T20:31:32.711473586Z" level=info msg="TearDown network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" successfully" Jan 13 20:31:32.711566 containerd[1444]: time="2025-01-13T20:31:32.711505430Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" returns successfully" Jan 13 20:31:32.712337 containerd[1444]: time="2025-01-13T20:31:32.712286843Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\"" Jan 13 20:31:32.712449 containerd[1444]: time="2025-01-13T20:31:32.712398456Z" level=info msg="TearDown network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" successfully" Jan 13 20:31:32.712449 containerd[1444]: time="2025-01-13T20:31:32.712410417Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" returns successfully" Jan 13 20:31:32.713025 containerd[1444]: time="2025-01-13T20:31:32.712993887Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\"" Jan 13 20:31:32.713094 containerd[1444]: time="2025-01-13T20:31:32.713079817Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully" Jan 13 20:31:32.713094 containerd[1444]: time="2025-01-13T20:31:32.713091979Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully" Jan 13 20:31:32.714434 containerd[1444]: time="2025-01-13T20:31:32.713847629Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:31:32.714434 containerd[1444]: time="2025-01-13T20:31:32.713922558Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:31:32.714434 containerd[1444]: time="2025-01-13T20:31:32.713932159Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:31:32.716438 containerd[1444]: time="2025-01-13T20:31:32.715965161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:31:32.716534 kubelet[2613]: I0113 20:31:32.716115 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:32.716534 kubelet[2613]: I0113 20:31:32.716129 2613 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3" Jan 13 20:31:32.716726 containerd[1444]: time="2025-01-13T20:31:32.716640601Z" level=info msg="StopPodSandbox for \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\"" Jan 13 20:31:32.716819 containerd[1444]: time="2025-01-13T20:31:32.716783978Z" level=info msg="Ensure that sandbox 407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3 in task-service has been cleanup successfully" Jan 13 20:31:32.717290 containerd[1444]: 
time="2025-01-13T20:31:32.717266556Z" level=info msg="TearDown network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" successfully" Jan 13 20:31:32.717335 containerd[1444]: time="2025-01-13T20:31:32.717285078Z" level=info msg="StopPodSandbox for \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" returns successfully" Jan 13 20:31:32.717914 containerd[1444]: time="2025-01-13T20:31:32.717814781Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\"" Jan 13 20:31:32.717914 containerd[1444]: time="2025-01-13T20:31:32.718138780Z" level=info msg="TearDown network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" successfully" Jan 13 20:31:32.717914 containerd[1444]: time="2025-01-13T20:31:32.718156462Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" returns successfully" Jan 13 20:31:32.719430 containerd[1444]: time="2025-01-13T20:31:32.719035727Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\"" Jan 13 20:31:32.719430 containerd[1444]: time="2025-01-13T20:31:32.719128338Z" level=info msg="TearDown network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" successfully" Jan 13 20:31:32.719430 containerd[1444]: time="2025-01-13T20:31:32.719138659Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" returns successfully" Jan 13 20:31:32.719564 containerd[1444]: time="2025-01-13T20:31:32.719509983Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\"" Jan 13 20:31:32.719690 containerd[1444]: time="2025-01-13T20:31:32.719630598Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully" Jan 13 20:31:32.719690 containerd[1444]: time="2025-01-13T20:31:32.719647040Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully" Jan 13 20:31:32.720052 containerd[1444]: time="2025-01-13T20:31:32.719999442Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\"" Jan 13 20:31:32.720123 containerd[1444]: time="2025-01-13T20:31:32.720087092Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully" Jan 13 20:31:32.720123 containerd[1444]: time="2025-01-13T20:31:32.720098173Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully" Jan 13 20:31:32.720611 containerd[1444]: time="2025-01-13T20:31:32.720508862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:5,}" Jan 13 20:31:32.727910 kubelet[2613]: E0113 20:31:32.727781 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:33.135073 systemd-networkd[1368]: cali327e43836ea: Link UP Jan 13 20:31:33.135233 systemd-networkd[1368]: cali327e43836ea: Gained carrier Jan 13 20:31:33.147167 kubelet[2613]: I0113 20:31:33.147120 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-kgvnf" 
podStartSLOduration=2.9172937020000003 podStartE2EDuration="13.14707731s" podCreationTimestamp="2025-01-13 20:31:20 +0000 UTC" firstStartedPulling="2025-01-13 20:31:21.271670494 +0000 UTC m=+19.883123440" lastFinishedPulling="2025-01-13 20:31:31.501454062 +0000 UTC m=+30.112907048" observedRunningTime="2025-01-13 20:31:31.68180798 +0000 UTC m=+30.293260966" watchObservedRunningTime="2025-01-13 20:31:33.14707731 +0000 UTC m=+31.758530376" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:32.746 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4562] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0 calico-kube-controllers-85d666cdf5- calico-system b42b7cb4-9adc-45b0-a43d-a62a51c30a4e 737 0 2025-01-13 20:31:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85d666cdf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85d666cdf5-4jgmf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali327e43836ea [] []}} ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4562] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.068 [INFO][4651] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" HandleID="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Workload="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.089 [INFO][4651] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" HandleID="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Workload="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038bae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85d666cdf5-4jgmf", "timestamp":"2025-01-13 20:31:33.068922431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.089 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.090 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.090 [INFO][4651] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.091 [INFO][4651] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.098 [INFO][4651] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.102 [INFO][4651] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.104 [INFO][4651] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.106 [INFO][4651] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.106 [INFO][4651] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.108 [INFO][4651] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.111 [INFO][4651] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4651] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4651] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" host="localhost" Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
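
[Annotation] With calico/node finally up, the ipam trace above shows the full allocation path for the first endpoint: take the host-wide IPAM lock, look up the host's block affinity, confirm and load the block 192.168.88.128/26, claim one address by writing the block back, and release the lock; the claimed address, 192.168.88.129, is reported just below. A compressed Go sketch of that block-affinity flow; types and names are hypothetical, and Calico's real ipam package is far more involved:

    // Compressed sketch of the block-affinity IPAM flow traced in the lines
    // above (lock, affinity lookup, block load, claim, write, unlock). Types
    // and names are hypothetical, not Calico's implementation.
    package main

    import (
    	"fmt"
    	"net"
    	"sync"
    )

    type allocationBlock struct {
    	cidr      *net.IPNet
    	allocated map[string]string // IP -> handle, persisted on each claim
    }

    var hostWideIPAMLock sync.Mutex // "About to acquire host-wide IPAM lock."

    // autoAssign claims the next free address in the host-affine block.
    func autoAssign(block *allocationBlock, handle string) (net.IP, error) {
    	hostWideIPAMLock.Lock()
    	defer hostWideIPAMLock.Unlock() // "Released host-wide IPAM lock."

    	ip := block.cidr.IP.Mask(block.cidr.Mask) // network address, skipped
    	for next := incIP(ip); block.cidr.Contains(next); next = incIP(next) {
    		if _, taken := block.allocated[next.String()]; !taken {
    			block.allocated[next.String()] = handle // "Writing block in order to claim IPs"
    			return next, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s exhausted", block.cidr)
    }

    func incIP(ip net.IP) net.IP {
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	for i := len(out) - 1; i >= 0; i-- {
    		out[i]++
    		if out[i] != 0 {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
    	block := &allocationBlock{cidr: cidr, allocated: map[string]string{}}
    	// Handles taken from the log records; prints .129 then .130.
    	a, _ := autoAssign(block, "k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb")
    	b, _ := autoAssign(block, "k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f")
    	fmt.Println(a, b)
    }
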
Jan 13 20:31:33.148603 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4651] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" HandleID="k8s-pod-network.80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Workload="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.122 [INFO][4562] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0", GenerateName:"calico-kube-controllers-85d666cdf5-", Namespace:"calico-system", SelfLink:"", UID:"b42b7cb4-9adc-45b0-a43d-a62a51c30a4e", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d666cdf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85d666cdf5-4jgmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali327e43836ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.122 [INFO][4562] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.122 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali327e43836ea ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.134 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.134 [INFO][4562] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0", GenerateName:"calico-kube-controllers-85d666cdf5-", Namespace:"calico-system", SelfLink:"", UID:"b42b7cb4-9adc-45b0-a43d-a62a51c30a4e", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85d666cdf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb", Pod:"calico-kube-controllers-85d666cdf5-4jgmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali327e43836ea", MAC:"22:d8:b7:80:d2:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.149124 containerd[1444]: 2025-01-13 20:31:33.145 [INFO][4562] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb" Namespace="calico-system" Pod="calico-kube-controllers-85d666cdf5-4jgmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85d666cdf5--4jgmf-eth0" Jan 13 20:31:33.179073 systemd-networkd[1368]: cali496824c18f6: Link UP Jan 13 20:31:33.180195 systemd-networkd[1368]: cali496824c18f6: Gained carrier Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:32.859 [INFO][4605] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:32.900 [INFO][4605] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0 calico-apiserver-5d7f47bd54- calico-apiserver 1cfacfd0-9476-421d-9ba4-8948bbbe88e8 730 0 2025-01-13 20:31:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7f47bd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7f47bd54-kjr8q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali496824c18f6 [] []}} ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:32.900 
[INFO][4605] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.072 [INFO][4672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" HandleID="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.092 [INFO][4672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" HandleID="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c58c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7f47bd54-kjr8q", "timestamp":"2025-01-13 20:31:33.072086371 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.092 [INFO][4672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.118 [INFO][4672] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.121 [INFO][4672] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.129 [INFO][4672] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.136 [INFO][4672] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.138 [INFO][4672] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.141 [INFO][4672] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.141 [INFO][4672] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.143 [INFO][4672] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.152 [INFO][4672] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4672] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4672] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" host="localhost" Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
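
[Annotation] The second pass through the identical sequence, just above, serves the calico-apiserver endpoint from the same host-affine block and claims the next free address, 192.168.88.130/26. That matches the sketch after the first allocation: once the node holds affinity for 192.168.88.128/26, later pods on the node draw from that block without a fresh pool lookup, and the host-wide lock is what orders the back-to-back claims.
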
Jan 13 20:31:33.197315 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" HandleID="k8s-pod-network.68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.174 [INFO][4605] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0", GenerateName:"calico-apiserver-5d7f47bd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfacfd0-9476-421d-9ba4-8948bbbe88e8", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f47bd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7f47bd54-kjr8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali496824c18f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.174 [INFO][4605] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.174 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali496824c18f6 ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.176 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.178 [INFO][4605] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0", GenerateName:"calico-apiserver-5d7f47bd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfacfd0-9476-421d-9ba4-8948bbbe88e8", ResourceVersion:"730", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f47bd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f", Pod:"calico-apiserver-5d7f47bd54-kjr8q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali496824c18f6", MAC:"be:e3:68:6e:49:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.198042 containerd[1444]: 2025-01-13 20:31:33.192 [INFO][4605] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-kjr8q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--kjr8q-eth0" Jan 13 20:31:33.228514 systemd-networkd[1368]: cali387ca4e96ac: Link UP Jan 13 20:31:33.229774 systemd-networkd[1368]: cali387ca4e96ac: Gained carrier Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:32.770 [INFO][4589] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4589] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--hlvx2-eth0 coredns-76f75df574- kube-system e9fd979c-1ebe-4de8-a229-23c188a43678 738 0 2025-01-13 20:31:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-hlvx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali387ca4e96ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4589] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" 
Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.073 [INFO][4649] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" HandleID="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Workload="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.091 [INFO][4649] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" HandleID="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Workload="localhost-k8s-coredns--76f75df574--hlvx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003679e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-hlvx2", "timestamp":"2025-01-13 20:31:33.073037846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.092 [INFO][4649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.162 [INFO][4649] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.165 [INFO][4649] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.171 [INFO][4649] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.182 [INFO][4649] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.184 [INFO][4649] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.190 [INFO][4649] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.191 [INFO][4649] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.194 [INFO][4649] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.202 [INFO][4649] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.208 [INFO][4649] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.208 [INFO][4649] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" host="localhost" Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.208 [INFO][4649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:31:33.267361 containerd[1444]: 2025-01-13 20:31:33.208 [INFO][4649] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" HandleID="k8s-pod-network.6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Workload="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.219 [INFO][4589] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hlvx2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9fd979c-1ebe-4de8-a229-23c188a43678", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-hlvx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali387ca4e96ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.220 [INFO][4589] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.220 [INFO][4589] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali387ca4e96ac ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" 
Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.231 [INFO][4589] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.231 [INFO][4589] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hlvx2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9fd979c-1ebe-4de8-a229-23c188a43678", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df", Pod:"coredns-76f75df574-hlvx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali387ca4e96ac", MAC:"86:c6:d2:17:93:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.267992 containerd[1444]: 2025-01-13 20:31:33.245 [INFO][4589] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df" Namespace="kube-system" Pod="coredns-76f75df574-hlvx2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hlvx2-eth0" Jan 13 20:31:33.274477 containerd[1444]: time="2025-01-13T20:31:33.273791630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.274477 containerd[1444]: time="2025-01-13T20:31:33.273855917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.274477 containerd[1444]: time="2025-01-13T20:31:33.273871359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.274477 containerd[1444]: time="2025-01-13T20:31:33.273964650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.300756 containerd[1444]: time="2025-01-13T20:31:33.237834705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.300756 containerd[1444]: time="2025-01-13T20:31:33.300283496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.300756 containerd[1444]: time="2025-01-13T20:31:33.300295897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.300756 containerd[1444]: time="2025-01-13T20:31:33.300416832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.314797 systemd-networkd[1368]: calicb1e7df543f: Link UP Jan 13 20:31:33.316245 systemd-networkd[1368]: calicb1e7df543f: Gained carrier Jan 13 20:31:33.320582 systemd[1]: Started cri-containerd-68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f.scope - libcontainer container 68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f. Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:32.752 [INFO][4572] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4572] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--v72tl-eth0 coredns-76f75df574- kube-system 998cfadf-febb-495f-927c-5b5b4a548933 735 0 2025-01-13 20:31:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-v72tl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb1e7df543f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:32.842 [INFO][4572] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.077 [INFO][4652] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" HandleID="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Workload="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.092 [INFO][4652] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" HandleID="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" 
Workload="localhost-k8s-coredns--76f75df574--v72tl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003494e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-v72tl", "timestamp":"2025-01-13 20:31:33.077666803 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.092 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.209 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.211 [INFO][4652] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.217 [INFO][4652] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.249 [INFO][4652] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.265 [INFO][4652] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.272 [INFO][4652] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.276 [INFO][4652] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.276 [INFO][4652] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.279 [INFO][4652] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1 Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.293 [INFO][4652] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.301 [INFO][4652] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.301 [INFO][4652] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" host="localhost" Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.301 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:31:33.348149 containerd[1444]: 2025-01-13 20:31:33.302 [INFO][4652] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" HandleID="k8s-pod-network.40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Workload="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.306 [INFO][4572] cni-plugin/k8s.go 386: Populated endpoint ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v72tl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"998cfadf-febb-495f-927c-5b5b4a548933", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-v72tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb1e7df543f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.306 [INFO][4572] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.306 [INFO][4572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb1e7df543f ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.321 [INFO][4572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.325 
[INFO][4572] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--v72tl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"998cfadf-febb-495f-927c-5b5b4a548933", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1", Pod:"coredns-76f75df574-v72tl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb1e7df543f", MAC:"72:42:d1:3f:d1:6b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.348791 containerd[1444]: 2025-01-13 20:31:33.342 [INFO][4572] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1" Namespace="kube-system" Pod="coredns-76f75df574-v72tl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--v72tl-eth0" Jan 13 20:31:33.352885 containerd[1444]: time="2025-01-13T20:31:33.350721042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.352885 containerd[1444]: time="2025-01-13T20:31:33.350780849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.352885 containerd[1444]: time="2025-01-13T20:31:33.350792650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.352885 containerd[1444]: time="2025-01-13T20:31:33.350872340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.363423 systemd[1]: Started cri-containerd-80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb.scope - libcontainer container 80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb. 
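The v3.WorkloadEndpoint dumps above are Go struct values printed in `%#v` style, so numeric fields appear in hex: `Port:0x35` is 53 (the `dns` and `dns-tcp` ports) and `Port:0x23c1` is 9153 (coredns's metrics port), while `Protocol{Type:1, StrVal:"UDP"}` is the string variant of Calico's int-or-string protocol union. After the dataplane step succeeds, the endpoint is written back with the MAC and active container ID filled in (MAC "72:42:d1:3f:d1:6b" for coredns-76f75df574-v72tl here). A trimmed sketch of that shape, with the hex values decoded, is below; the field names follow the log, but the struct is abbreviated for illustration and is not the real libcalico-go type.

```go
package main

import "fmt"

// Abbreviated mirror of the WorkloadEndpointSpec fields visible in the
// log dump; the real type lives in Calico's libcalico-go API package.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

type workloadEndpointSpec struct {
	Pod           string
	InterfaceName string
	MAC           string
	IPNetworks    []string
	Ports         []endpointPort
}

func main() {
	ep := workloadEndpointSpec{
		Pod:           "coredns-76f75df574-v72tl",
		InterfaceName: "calicb1e7df543f",
		MAC:           "72:42:d1:3f:d1:6b",
		IPNetworks:    []string{"192.168.88.132/32"},
		Ports: []endpointPort{
			{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
			{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
			{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
		},
	}
	fmt.Printf("%s on %s (%s) %v ports: %d %d %d\n",
		ep.Pod, ep.InterfaceName, ep.MAC, ep.IPNetworks,
		ep.Ports[0].Port, ep.Ports[1].Port, ep.Ports[2].Port)
	// ports print as 53 53 9153: the hex values from the log, decoded
}
```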
Jan 13 20:31:33.370857 systemd-networkd[1368]: caliacb92faa5d8: Link UP Jan 13 20:31:33.371046 systemd-networkd[1368]: caliacb92faa5d8: Gained carrier Jan 13 20:31:33.371910 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.390943 containerd[1444]: time="2025-01-13T20:31:33.390284040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.390943 containerd[1444]: time="2025-01-13T20:31:33.390360449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.390943 containerd[1444]: time="2025-01-13T20:31:33.390373130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.390943 containerd[1444]: time="2025-01-13T20:31:33.390479623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:32.855 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:32.878 [INFO][4607] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xmsjq-eth0 csi-node-driver- calico-system d62b149c-90ef-4582-bf5b-b3dad659f453 659 0 2025-01-13 20:31:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xmsjq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliacb92faa5d8 [] []}} ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:32.878 [INFO][4607] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.072 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" HandleID="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Workload="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.093 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" HandleID="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Workload="localhost-k8s-csi--node--driver--xmsjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f0250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xmsjq", "timestamp":"2025-01-13 20:31:33.072094532 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.094 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.302 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.302 [INFO][4671] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.307 [INFO][4671] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.317 [INFO][4671] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.326 [INFO][4671] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.329 [INFO][4671] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.335 [INFO][4671] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.335 [INFO][4671] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.339 [INFO][4671] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.348 [INFO][4671] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.359 [INFO][4671] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.359 [INFO][4671] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" host="localhost" Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.359 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:31:33.398564 containerd[1444]: 2025-01-13 20:31:33.359 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" HandleID="k8s-pod-network.28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Workload="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.362 [INFO][4607] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xmsjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d62b149c-90ef-4582-bf5b-b3dad659f453", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xmsjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb92faa5d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.362 [INFO][4607] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.362 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacb92faa5d8 ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.372 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.381 [INFO][4607] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xmsjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d62b149c-90ef-4582-bf5b-b3dad659f453", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa", Pod:"csi-node-driver-xmsjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliacb92faa5d8", MAC:"76:52:b7:4b:a0:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.399464 containerd[1444]: 2025-01-13 20:31:33.394 [INFO][4607] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa" Namespace="calico-system" Pod="csi-node-driver-xmsjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--xmsjq-eth0" Jan 13 20:31:33.405093 systemd[1]: Started cri-containerd-6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df.scope - libcontainer container 6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df. Jan 13 20:31:33.416129 systemd[1]: Started cri-containerd-40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1.scope - libcontainer container 40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1. 
Jan 13 20:31:33.421023 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.438732 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.439233 containerd[1444]: time="2025-01-13T20:31:33.439152997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-kjr8q,Uid:1cfacfd0-9476-421d-9ba4-8948bbbe88e8,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f\"" Jan 13 20:31:33.441587 containerd[1444]: time="2025-01-13T20:31:33.441421550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:31:33.452679 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.456660 systemd-networkd[1368]: calide6473d1b79: Link UP Jan 13 20:31:33.457953 systemd-networkd[1368]: calide6473d1b79: Gained carrier Jan 13 20:31:33.479944 containerd[1444]: time="2025-01-13T20:31:33.479864933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hlvx2,Uid:e9fd979c-1ebe-4de8-a229-23c188a43678,Namespace:kube-system,Attempt:5,} returns sandbox id \"6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df\"" Jan 13 20:31:33.485202 kubelet[2613]: E0113 20:31:33.485173 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:33.492426 containerd[1444]: time="2025-01-13T20:31:33.492295108Z" level=info msg="CreateContainer within sandbox \"6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:31:33.493311 containerd[1444]: time="2025-01-13T20:31:33.493099325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.493311 containerd[1444]: time="2025-01-13T20:31:33.493175534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.493311 containerd[1444]: time="2025-01-13T20:31:33.493188336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.493311 containerd[1444]: time="2025-01-13T20:31:33.493270825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:32.833 [INFO][4627] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:32.851 [INFO][4627] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0 calico-apiserver-5d7f47bd54- calico-apiserver ef49bc1a-4bb1-4428-954b-8600a024bc5a 736 0 2025-01-13 20:31:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7f47bd54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d7f47bd54-dc6xn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide6473d1b79 [] []}} ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:32.851 [INFO][4627] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.079 [INFO][4650] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" HandleID="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.095 [INFO][4650] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" HandleID="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d7f47bd54-dc6xn", "timestamp":"2025-01-13 20:31:33.079005123 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.096 [INFO][4650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.360 [INFO][4650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.360 [INFO][4650] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.366 [INFO][4650] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.377 [INFO][4650] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.389 [INFO][4650] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.399 [INFO][4650] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.403 [INFO][4650] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.403 [INFO][4650] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.411 [INFO][4650] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.426 [INFO][4650] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.445 [INFO][4650] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.445 [INFO][4650] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" host="localhost" Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.445 [INFO][4650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:31:33.498765 containerd[1444]: 2025-01-13 20:31:33.445 [INFO][4650] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" HandleID="k8s-pod-network.0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Workload="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.453 [INFO][4627] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0", GenerateName:"calico-apiserver-5d7f47bd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef49bc1a-4bb1-4428-954b-8600a024bc5a", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f47bd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d7f47bd54-dc6xn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide6473d1b79", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.453 [INFO][4627] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.453 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide6473d1b79 ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.459 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.461 [INFO][4627] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0", GenerateName:"calico-apiserver-5d7f47bd54-", Namespace:"calico-apiserver", SelfLink:"", UID:"ef49bc1a-4bb1-4428-954b-8600a024bc5a", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 31, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7f47bd54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b", Pod:"calico-apiserver-5d7f47bd54-dc6xn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide6473d1b79", MAC:"12:89:c2:9f:5f:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:31:33.500357 containerd[1444]: 2025-01-13 20:31:33.480 [INFO][4627] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b" Namespace="calico-apiserver" Pod="calico-apiserver-5d7f47bd54-dc6xn" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d7f47bd54--dc6xn-eth0" Jan 13 20:31:33.500357 containerd[1444]: time="2025-01-13T20:31:33.498913744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85d666cdf5-4jgmf,Uid:b42b7cb4-9adc-45b0-a43d-a62a51c30a4e,Namespace:calico-system,Attempt:5,} returns sandbox id \"80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb\"" Jan 13 20:31:33.515417 containerd[1444]: time="2025-01-13T20:31:33.514391566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v72tl,Uid:998cfadf-febb-495f-927c-5b5b4a548933,Namespace:kube-system,Attempt:5,} returns sandbox id \"40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1\"" Jan 13 20:31:33.518086 kubelet[2613]: E0113 20:31:33.518042 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:33.523608 containerd[1444]: time="2025-01-13T20:31:33.523561908Z" level=info msg="CreateContainer within sandbox \"40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:31:33.527577 systemd[1]: Started cri-containerd-28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa.scope - libcontainer container 28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa. 
Jan 13 20:31:33.542505 containerd[1444]: time="2025-01-13T20:31:33.541136502Z" level=info msg="CreateContainer within sandbox \"6a795dbdac694c1215640086fad34ba3fe6ab281759531b99ab603fbff6492df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a2412df52d438f9cd0e0991c387c78e3f24347d27b3927cac8ea4c2027c4575\"" Jan 13 20:31:33.543620 containerd[1444]: time="2025-01-13T20:31:33.543588677Z" level=info msg="StartContainer for \"8a2412df52d438f9cd0e0991c387c78e3f24347d27b3927cac8ea4c2027c4575\"" Jan 13 20:31:33.554047 containerd[1444]: time="2025-01-13T20:31:33.553949043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:31:33.554196 containerd[1444]: time="2025-01-13T20:31:33.554021172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:31:33.554196 containerd[1444]: time="2025-01-13T20:31:33.554033173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.554494 containerd[1444]: time="2025-01-13T20:31:33.554363693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:31:33.560255 containerd[1444]: time="2025-01-13T20:31:33.560211876Z" level=info msg="CreateContainer within sandbox \"40c22edc54d82453edeabfc08555e4f461a0c2e70a44611c31c5d351614b1bb1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2f6d78333f100f6f24a85ab876e9d05ddcf8e07214cfa346e20ae33e74e5e799\"" Jan 13 20:31:33.562462 containerd[1444]: time="2025-01-13T20:31:33.562367775Z" level=info msg="StartContainer for \"2f6d78333f100f6f24a85ab876e9d05ddcf8e07214cfa346e20ae33e74e5e799\"" Jan 13 20:31:33.579238 systemd[1]: Started cri-containerd-0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b.scope - libcontainer container 0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b. Jan 13 20:31:33.596095 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.602576 systemd[1]: Started cri-containerd-8a2412df52d438f9cd0e0991c387c78e3f24347d27b3927cac8ea4c2027c4575.scope - libcontainer container 8a2412df52d438f9cd0e0991c387c78e3f24347d27b3927cac8ea4c2027c4575. Jan 13 20:31:33.606155 systemd[1]: Started cri-containerd-2f6d78333f100f6f24a85ab876e9d05ddcf8e07214cfa346e20ae33e74e5e799.scope - libcontainer container 2f6d78333f100f6f24a85ab876e9d05ddcf8e07214cfa346e20ae33e74e5e799. 
Jan 13 20:31:33.613686 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:31:33.616523 containerd[1444]: time="2025-01-13T20:31:33.616482484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xmsjq,Uid:d62b149c-90ef-4582-bf5b-b3dad659f453,Namespace:calico-system,Attempt:5,} returns sandbox id \"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa\"" Jan 13 20:31:33.639586 containerd[1444]: time="2025-01-13T20:31:33.639245781Z" level=info msg="StartContainer for \"2f6d78333f100f6f24a85ab876e9d05ddcf8e07214cfa346e20ae33e74e5e799\" returns successfully" Jan 13 20:31:33.656970 containerd[1444]: time="2025-01-13T20:31:33.656847378Z" level=info msg="StartContainer for \"8a2412df52d438f9cd0e0991c387c78e3f24347d27b3927cac8ea4c2027c4575\" returns successfully" Jan 13 20:31:33.656970 containerd[1444]: time="2025-01-13T20:31:33.656934949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7f47bd54-dc6xn,Uid:ef49bc1a-4bb1-4428-954b-8600a024bc5a,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b\"" Jan 13 20:31:33.665350 systemd[1]: run-netns-cni\x2d6033f057\x2d532f\x2d211c\x2d4c49\x2d793e89c5f507.mount: Deactivated successfully. Jan 13 20:31:33.665468 systemd[1]: run-netns-cni\x2dee6f51ab\x2d21f2\x2d9e42\x2d5300\x2d41317d39f88c.mount: Deactivated successfully. Jan 13 20:31:33.721701 kubelet[2613]: E0113 20:31:33.721352 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:33.729303 kubelet[2613]: E0113 20:31:33.729200 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:33.758167 kubelet[2613]: I0113 20:31:33.758045 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-v72tl" podStartSLOduration=18.758004544 podStartE2EDuration="18.758004544s" podCreationTimestamp="2025-01-13 20:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:33.732424187 +0000 UTC m=+32.343877173" watchObservedRunningTime="2025-01-13 20:31:33.758004544 +0000 UTC m=+32.369457530" Jan 13 20:31:33.758550 kubelet[2613]: I0113 20:31:33.758454 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hlvx2" podStartSLOduration=18.758423434 podStartE2EDuration="18.758423434s" podCreationTimestamp="2025-01-13 20:31:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:31:33.750884168 +0000 UTC m=+32.362337154" watchObservedRunningTime="2025-01-13 20:31:33.758423434 +0000 UTC m=+32.369876380" Jan 13 20:31:34.252554 systemd-networkd[1368]: cali327e43836ea: Gained IPv6LL Jan 13 20:31:34.446516 systemd-networkd[1368]: cali496824c18f6: Gained IPv6LL Jan 13 20:31:34.755445 kubelet[2613]: E0113 20:31:34.754462 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:34.755445 kubelet[2613]: E0113 20:31:34.754531 2613 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:34.764601 systemd-networkd[1368]: caliacb92faa5d8: Gained IPv6LL Jan 13 20:31:34.956982 systemd-networkd[1368]: calicb1e7df543f: Gained IPv6LL Jan 13 20:31:35.012988 containerd[1444]: time="2025-01-13T20:31:35.012529894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:35.013334 containerd[1444]: time="2025-01-13T20:31:35.013015170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 20:31:35.014083 containerd[1444]: time="2025-01-13T20:31:35.014024846Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:35.016978 containerd[1444]: time="2025-01-13T20:31:35.016940985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:35.017680 containerd[1444]: time="2025-01-13T20:31:35.017636597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.576171202s" Jan 13 20:31:35.017680 containerd[1444]: time="2025-01-13T20:31:35.017672360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:31:35.019770 containerd[1444]: time="2025-01-13T20:31:35.018404615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:31:35.020717 containerd[1444]: time="2025-01-13T20:31:35.020685546Z" level=info msg="CreateContainer within sandbox \"68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:31:35.022503 systemd-networkd[1368]: calide6473d1b79: Gained IPv6LL Jan 13 20:31:35.035278 containerd[1444]: time="2025-01-13T20:31:35.035052424Z" level=info msg="CreateContainer within sandbox \"68a74e6654f68f9ac3813c4b9886c978df2324348cc6f3f36651ddc5a8ad085f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70e1186eab98a972904e76a1b1a7a5ca80665465b962fccbee113414c9d1725a\"" Jan 13 20:31:35.035840 containerd[1444]: time="2025-01-13T20:31:35.035812241Z" level=info msg="StartContainer for \"70e1186eab98a972904e76a1b1a7a5ca80665465b962fccbee113414c9d1725a\"" Jan 13 20:31:35.062593 systemd[1]: Started cri-containerd-70e1186eab98a972904e76a1b1a7a5ca80665465b962fccbee113414c9d1725a.scope - libcontainer container 70e1186eab98a972904e76a1b1a7a5ca80665465b962fccbee113414c9d1725a. 
Jan 13 20:31:35.084517 systemd-networkd[1368]: cali387ca4e96ac: Gained IPv6LL Jan 13 20:31:35.097559 containerd[1444]: time="2025-01-13T20:31:35.097502312Z" level=info msg="StartContainer for \"70e1186eab98a972904e76a1b1a7a5ca80665465b962fccbee113414c9d1725a\" returns successfully" Jan 13 20:31:35.758601 kubelet[2613]: E0113 20:31:35.758561 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:35.759184 kubelet[2613]: E0113 20:31:35.759083 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:35.773796 kubelet[2613]: I0113 20:31:35.773196 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7f47bd54-kjr8q" podStartSLOduration=14.196043136 podStartE2EDuration="15.773150949s" podCreationTimestamp="2025-01-13 20:31:20 +0000 UTC" firstStartedPulling="2025-01-13 20:31:33.440997019 +0000 UTC m=+32.052450005" lastFinishedPulling="2025-01-13 20:31:35.018104832 +0000 UTC m=+33.629557818" observedRunningTime="2025-01-13 20:31:35.77143726 +0000 UTC m=+34.382890286" watchObservedRunningTime="2025-01-13 20:31:35.773150949 +0000 UTC m=+34.384603935" Jan 13 20:31:35.848535 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:50582.service - OpenSSH per-connection server daemon (10.0.0.1:50582). Jan 13 20:31:35.903679 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 50582 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:35.904323 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:35.909278 systemd-logind[1423]: New session 9 of user core. Jan 13 20:31:35.919581 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 20:31:36.224483 sshd[5300]: Connection closed by 10.0.0.1 port 50582 Jan 13 20:31:36.225089 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:36.231013 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:50582.service: Deactivated successfully. Jan 13 20:31:36.235573 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 20:31:36.238663 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit. Jan 13 20:31:36.241081 systemd-logind[1423]: Removed session 9. 
Jan 13 20:31:36.593836 containerd[1444]: time="2025-01-13T20:31:36.593008139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:36.593836 containerd[1444]: time="2025-01-13T20:31:36.593785276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 20:31:36.594374 containerd[1444]: time="2025-01-13T20:31:36.594344596Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:36.600730 containerd[1444]: time="2025-01-13T20:31:36.600688099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:36.601375 containerd[1444]: time="2025-01-13T20:31:36.601342427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.58290249s" Jan 13 20:31:36.601375 containerd[1444]: time="2025-01-13T20:31:36.601376790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 20:31:36.602641 containerd[1444]: time="2025-01-13T20:31:36.602454388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:31:36.615446 containerd[1444]: time="2025-01-13T20:31:36.614160923Z" level=info msg="CreateContainer within sandbox \"80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:31:36.653822 containerd[1444]: time="2025-01-13T20:31:36.653767934Z" level=info msg="CreateContainer within sandbox \"80d653d707a221419585e5a6b81aea0704d0026001dbc420dad3dce07fe23bfb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"73f49d79cd804c08c7be20332cdfc244aab32d24e1a7377ecadf685418ebd13c\"" Jan 13 20:31:36.654360 containerd[1444]: time="2025-01-13T20:31:36.654336495Z" level=info msg="StartContainer for \"73f49d79cd804c08c7be20332cdfc244aab32d24e1a7377ecadf685418ebd13c\"" Jan 13 20:31:36.702568 systemd[1]: Started cri-containerd-73f49d79cd804c08c7be20332cdfc244aab32d24e1a7377ecadf685418ebd13c.scope - libcontainer container 73f49d79cd804c08c7be20332cdfc244aab32d24e1a7377ecadf685418ebd13c. 
Jan 13 20:31:36.737425 containerd[1444]: time="2025-01-13T20:31:36.736757271Z" level=info msg="StartContainer for \"73f49d79cd804c08c7be20332cdfc244aab32d24e1a7377ecadf685418ebd13c\" returns successfully" Jan 13 20:31:36.763992 kubelet[2613]: I0113 20:31:36.763959 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:36.766112 kubelet[2613]: E0113 20:31:36.766084 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:36.775779 kubelet[2613]: I0113 20:31:36.775724 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85d666cdf5-4jgmf" podStartSLOduration=12.674390626 podStartE2EDuration="15.775682192s" podCreationTimestamp="2025-01-13 20:31:21 +0000 UTC" firstStartedPulling="2025-01-13 20:31:33.500472052 +0000 UTC m=+32.111925038" lastFinishedPulling="2025-01-13 20:31:36.601763618 +0000 UTC m=+35.213216604" observedRunningTime="2025-01-13 20:31:36.775351207 +0000 UTC m=+35.386804193" watchObservedRunningTime="2025-01-13 20:31:36.775682192 +0000 UTC m=+35.387135138" Jan 13 20:31:37.773707 kubelet[2613]: I0113 20:31:37.773657 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:37.817767 containerd[1444]: time="2025-01-13T20:31:37.817710082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:37.818214 containerd[1444]: time="2025-01-13T20:31:37.818168754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:31:37.818933 containerd[1444]: time="2025-01-13T20:31:37.818910407Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:37.820995 containerd[1444]: time="2025-01-13T20:31:37.820961593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:37.821878 containerd[1444]: time="2025-01-13T20:31:37.821845455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.219359145s" Jan 13 20:31:37.821936 containerd[1444]: time="2025-01-13T20:31:37.821878338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:31:37.823124 containerd[1444]: time="2025-01-13T20:31:37.823092744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:31:37.825453 containerd[1444]: time="2025-01-13T20:31:37.825409708Z" level=info msg="CreateContainer within sandbox \"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:31:37.839503 containerd[1444]: time="2025-01-13T20:31:37.839452225Z" level=info msg="CreateContainer within sandbox 
\"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"488aa57ed13bf54b3714eb4c8a51846a391de026b48ad271a0b0552ea152bf87\"" Jan 13 20:31:37.839940 containerd[1444]: time="2025-01-13T20:31:37.839914938Z" level=info msg="StartContainer for \"488aa57ed13bf54b3714eb4c8a51846a391de026b48ad271a0b0552ea152bf87\"" Jan 13 20:31:37.874581 systemd[1]: Started cri-containerd-488aa57ed13bf54b3714eb4c8a51846a391de026b48ad271a0b0552ea152bf87.scope - libcontainer container 488aa57ed13bf54b3714eb4c8a51846a391de026b48ad271a0b0552ea152bf87. Jan 13 20:31:37.902504 containerd[1444]: time="2025-01-13T20:31:37.902465897Z" level=info msg="StartContainer for \"488aa57ed13bf54b3714eb4c8a51846a391de026b48ad271a0b0552ea152bf87\" returns successfully" Jan 13 20:31:38.048887 containerd[1444]: time="2025-01-13T20:31:38.048766867Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:38.049529 containerd[1444]: time="2025-01-13T20:31:38.049308984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:31:38.051602 containerd[1444]: time="2025-01-13T20:31:38.051563060Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 228.431913ms" Jan 13 20:31:38.051602 containerd[1444]: time="2025-01-13T20:31:38.051596462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:31:38.052764 containerd[1444]: time="2025-01-13T20:31:38.052737821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:31:38.054189 containerd[1444]: time="2025-01-13T20:31:38.054159639Z" level=info msg="CreateContainer within sandbox \"0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:31:38.065629 containerd[1444]: time="2025-01-13T20:31:38.065527064Z" level=info msg="CreateContainer within sandbox \"0a2be3777a932b23730a675bdce313de78551bd3c0ac082cda5d8a4dc49da70b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"78e28710ced217be9ee32d050b219655a21207e50dab2af02c837e97f64c2340\"" Jan 13 20:31:38.066116 containerd[1444]: time="2025-01-13T20:31:38.065948093Z" level=info msg="StartContainer for \"78e28710ced217be9ee32d050b219655a21207e50dab2af02c837e97f64c2340\"" Jan 13 20:31:38.094541 systemd[1]: Started cri-containerd-78e28710ced217be9ee32d050b219655a21207e50dab2af02c837e97f64c2340.scope - libcontainer container 78e28710ced217be9ee32d050b219655a21207e50dab2af02c837e97f64c2340. 
Jan 13 20:31:38.126237 containerd[1444]: time="2025-01-13T20:31:38.126110846Z" level=info msg="StartContainer for \"78e28710ced217be9ee32d050b219655a21207e50dab2af02c837e97f64c2340\" returns successfully" Jan 13 20:31:38.794364 kubelet[2613]: I0113 20:31:38.794238 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7f47bd54-dc6xn" podStartSLOduration=14.403270749 podStartE2EDuration="18.794193919s" podCreationTimestamp="2025-01-13 20:31:20 +0000 UTC" firstStartedPulling="2025-01-13 20:31:33.660954672 +0000 UTC m=+32.272407658" lastFinishedPulling="2025-01-13 20:31:38.051877882 +0000 UTC m=+36.663330828" observedRunningTime="2025-01-13 20:31:38.793890338 +0000 UTC m=+37.405343364" watchObservedRunningTime="2025-01-13 20:31:38.794193919 +0000 UTC m=+37.405646905" Jan 13 20:31:39.155529 containerd[1444]: time="2025-01-13T20:31:39.155397760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:39.156540 containerd[1444]: time="2025-01-13T20:31:39.156349384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:31:39.157268 containerd[1444]: time="2025-01-13T20:31:39.157224003Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:39.160455 containerd[1444]: time="2025-01-13T20:31:39.159511876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:31:39.160579 containerd[1444]: time="2025-01-13T20:31:39.160414857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.107641353s" Jan 13 20:31:39.160620 containerd[1444]: time="2025-01-13T20:31:39.160583068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:31:39.165821 containerd[1444]: time="2025-01-13T20:31:39.165783457Z" level=info msg="CreateContainer within sandbox \"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:31:39.179250 containerd[1444]: time="2025-01-13T20:31:39.179192437Z" level=info msg="CreateContainer within sandbox \"28001883f98c50677f160b90d34835a225ad00f6140ea7d5afad12e26194dcaa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b\"" Jan 13 20:31:39.179731 containerd[1444]: time="2025-01-13T20:31:39.179687871Z" level=info msg="StartContainer for \"2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b\"" Jan 13 20:31:39.214771 systemd[1]: Started cri-containerd-2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b.scope - libcontainer container 
2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b. Jan 13 20:31:39.244879 containerd[1444]: time="2025-01-13T20:31:39.244811403Z" level=info msg="StartContainer for \"2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b\" returns successfully" Jan 13 20:31:39.550503 kubelet[2613]: I0113 20:31:39.550468 2613 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 20:31:39.552006 kubelet[2613]: I0113 20:31:39.551987 2613 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:31:39.795680 kubelet[2613]: I0113 20:31:39.795635 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:39.807588 kubelet[2613]: I0113 20:31:39.807436 2613 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xmsjq" podStartSLOduration=13.267875512 podStartE2EDuration="18.80736469s" podCreationTimestamp="2025-01-13 20:31:21 +0000 UTC" firstStartedPulling="2025-01-13 20:31:33.621313345 +0000 UTC m=+32.232766331" lastFinishedPulling="2025-01-13 20:31:39.160802563 +0000 UTC m=+37.772255509" observedRunningTime="2025-01-13 20:31:39.806750008 +0000 UTC m=+38.418203034" watchObservedRunningTime="2025-01-13 20:31:39.80736469 +0000 UTC m=+38.418817676" Jan 13 20:31:39.838021 systemd[1]: run-containerd-runc-k8s.io-2bfb75c874787a4fe2e43216b11dc3554fd1e695eeb3ca7fd7eee78842f27c3b-runc.EkeKVk.mount: Deactivated successfully. Jan 13 20:31:41.238799 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:50586.service - OpenSSH per-connection server daemon (10.0.0.1:50586). Jan 13 20:31:41.300517 sshd[5606]: Accepted publickey for core from 10.0.0.1 port 50586 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:41.302222 sshd-session[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:41.306947 systemd-logind[1423]: New session 10 of user core. Jan 13 20:31:41.315607 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 20:31:41.517327 sshd[5610]: Connection closed by 10.0.0.1 port 50586 Jan 13 20:31:41.518243 sshd-session[5606]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:41.528015 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:50586.service: Deactivated successfully. Jan 13 20:31:41.531247 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 20:31:41.531891 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit. Jan 13 20:31:41.543737 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:50592.service - OpenSSH per-connection server daemon (10.0.0.1:50592). Jan 13 20:31:41.545008 systemd-logind[1423]: Removed session 10. Jan 13 20:31:41.590987 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 50592 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:41.592877 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:41.597483 systemd-logind[1423]: New session 11 of user core. Jan 13 20:31:41.609575 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 20:31:41.815322 sshd[5625]: Connection closed by 10.0.0.1 port 50592 Jan 13 20:31:41.816091 sshd-session[5623]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:41.826636 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:50592.service: Deactivated successfully. Jan 13 20:31:41.833822 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 20:31:41.837793 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit. Jan 13 20:31:41.850775 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:50604.service - OpenSSH per-connection server daemon (10.0.0.1:50604). Jan 13 20:31:41.851501 systemd-logind[1423]: Removed session 11. Jan 13 20:31:41.893989 sshd[5644]: Accepted publickey for core from 10.0.0.1 port 50604 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:41.895772 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:41.899833 systemd-logind[1423]: New session 12 of user core. Jan 13 20:31:41.909571 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 20:31:42.091097 sshd[5646]: Connection closed by 10.0.0.1 port 50604 Jan 13 20:31:42.092897 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:42.096348 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:50604.service: Deactivated successfully. Jan 13 20:31:42.098157 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 20:31:42.098818 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit. Jan 13 20:31:42.099811 systemd-logind[1423]: Removed session 12. Jan 13 20:31:42.370556 kubelet[2613]: I0113 20:31:42.369825 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:42.370919 kubelet[2613]: E0113 20:31:42.370642 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:42.463340 systemd[1]: run-containerd-runc-k8s.io-2163edc8d0da86d17eb01faaeb1e5cd65981dc7910f2e96263c31471df870937-runc.C6btsI.mount: Deactivated successfully. Jan 13 20:31:42.514179 kubelet[2613]: I0113 20:31:42.514139 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:42.514922 kubelet[2613]: E0113 20:31:42.514897 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:42.803166 kubelet[2613]: E0113 20:31:42.803128 2613 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:31:42.841426 kernel: bpftool[5747]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:31:43.025757 systemd-networkd[1368]: vxlan.calico: Link UP Jan 13 20:31:43.025767 systemd-networkd[1368]: vxlan.calico: Gained carrier Jan 13 20:31:44.274799 kubelet[2613]: I0113 20:31:44.274740 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:31:44.300543 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Jan 13 20:31:47.107311 systemd[1]: Started sshd@12-10.0.0.144:22-10.0.0.1:36710.service - OpenSSH per-connection server daemon (10.0.0.1:36710). 
Jan 13 20:31:47.163957 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 36710 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:47.165616 sshd-session[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:47.170490 systemd-logind[1423]: New session 13 of user core. Jan 13 20:31:47.180529 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 20:31:47.379639 sshd[5924]: Connection closed by 10.0.0.1 port 36710 Jan 13 20:31:47.380662 sshd-session[5922]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:47.389204 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:36710.service: Deactivated successfully. Jan 13 20:31:47.392530 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 20:31:47.395832 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit. Jan 13 20:31:47.404914 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:36726.service - OpenSSH per-connection server daemon (10.0.0.1:36726). Jan 13 20:31:47.405766 systemd-logind[1423]: Removed session 13. Jan 13 20:31:47.447320 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 36726 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:47.447877 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:47.451765 systemd-logind[1423]: New session 14 of user core. Jan 13 20:31:47.461568 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 20:31:47.684453 sshd[5938]: Connection closed by 10.0.0.1 port 36726 Jan 13 20:31:47.684749 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:47.698533 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:36726.service: Deactivated successfully. Jan 13 20:31:47.700359 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 20:31:47.701058 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit. Jan 13 20:31:47.703227 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:36730.service - OpenSSH per-connection server daemon (10.0.0.1:36730). Jan 13 20:31:47.704567 systemd-logind[1423]: Removed session 14. Jan 13 20:31:47.753399 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 36730 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:47.754691 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:47.759076 systemd-logind[1423]: New session 15 of user core. Jan 13 20:31:47.769557 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 20:31:49.149956 sshd[5951]: Connection closed by 10.0.0.1 port 36730 Jan 13 20:31:49.150795 sshd-session[5949]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:49.158283 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:36730.service: Deactivated successfully. Jan 13 20:31:49.161976 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 20:31:49.165016 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit. Jan 13 20:31:49.176599 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:36736.service - OpenSSH per-connection server daemon (10.0.0.1:36736). Jan 13 20:31:49.178535 systemd-logind[1423]: Removed session 15. 
Jan 13 20:31:49.223498 sshd[5978]: Accepted publickey for core from 10.0.0.1 port 36736 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:49.224883 sshd-session[5978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:49.229155 systemd-logind[1423]: New session 16 of user core. Jan 13 20:31:49.243532 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 20:31:49.577088 sshd[5981]: Connection closed by 10.0.0.1 port 36736 Jan 13 20:31:49.579316 sshd-session[5978]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:49.588238 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:36736.service: Deactivated successfully. Jan 13 20:31:49.591176 systemd[1]: session-16.scope: Deactivated successfully. Jan 13 20:31:49.592979 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit. Jan 13 20:31:49.604782 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:36740.service - OpenSSH per-connection server daemon (10.0.0.1:36740). Jan 13 20:31:49.609450 systemd-logind[1423]: Removed session 16. Jan 13 20:31:49.645834 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:49.647134 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:49.651280 systemd-logind[1423]: New session 17 of user core. Jan 13 20:31:49.660582 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 20:31:49.806846 sshd[5993]: Connection closed by 10.0.0.1 port 36740 Jan 13 20:31:49.807259 sshd-session[5991]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:49.810891 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:36740.service: Deactivated successfully. Jan 13 20:31:49.812775 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 20:31:49.814060 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit. Jan 13 20:31:49.815119 systemd-logind[1423]: Removed session 17. Jan 13 20:31:54.822226 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:39744.service - OpenSSH per-connection server daemon (10.0.0.1:39744). Jan 13 20:31:54.864180 sshd[6018]: Accepted publickey for core from 10.0.0.1 port 39744 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:31:54.865837 sshd-session[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:31:54.869777 systemd-logind[1423]: New session 18 of user core. Jan 13 20:31:54.876587 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 20:31:54.997831 sshd[6020]: Connection closed by 10.0.0.1 port 39744 Jan 13 20:31:54.998679 sshd-session[6018]: pam_unix(sshd:session): session closed for user core Jan 13 20:31:55.002101 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:39744.service: Deactivated successfully. Jan 13 20:31:55.004930 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 20:31:55.006032 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit. Jan 13 20:31:55.007220 systemd-logind[1423]: Removed session 18. Jan 13 20:31:56.273000 kubelet[2613]: I0113 20:31:56.272885 2613 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:32:00.021656 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:39758.service - OpenSSH per-connection server daemon (10.0.0.1:39758). 
Jan 13 20:32:00.060318 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 39758 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:32:00.061718 sshd-session[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:32:00.066001 systemd-logind[1423]: New session 19 of user core. Jan 13 20:32:00.079571 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 20:32:00.210348 sshd[6037]: Connection closed by 10.0.0.1 port 39758 Jan 13 20:32:00.211078 sshd-session[6035]: pam_unix(sshd:session): session closed for user core Jan 13 20:32:00.213680 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:39758.service: Deactivated successfully. Jan 13 20:32:00.215682 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 20:32:00.217370 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit. Jan 13 20:32:00.219825 systemd-logind[1423]: Removed session 19. Jan 13 20:32:01.460915 containerd[1444]: time="2025-01-13T20:32:01.460870599Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:32:01.461279 containerd[1444]: time="2025-01-13T20:32:01.460977483Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:32:01.461279 containerd[1444]: time="2025-01-13T20:32:01.460988163Z" level=info msg="StopPodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:32:01.461712 containerd[1444]: time="2025-01-13T20:32:01.461689350Z" level=info msg="RemovePodSandbox for \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:32:01.464554 containerd[1444]: time="2025-01-13T20:32:01.464509496Z" level=info msg="Forcibly stopping sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\"" Jan 13 20:32:01.464620 containerd[1444]: time="2025-01-13T20:32:01.464603340Z" level=info msg="TearDown network for sandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" successfully" Jan 13 20:32:01.478097 containerd[1444]: time="2025-01-13T20:32:01.478055007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.478152 containerd[1444]: time="2025-01-13T20:32:01.478127450Z" level=info msg="RemovePodSandbox \"9bf293de6a34e826906beec0759f7be65dfc1942cc5a44a7724c39b04e6c7eea\" returns successfully" Jan 13 20:32:01.478819 containerd[1444]: time="2025-01-13T20:32:01.478630469Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:32:01.478819 containerd[1444]: time="2025-01-13T20:32:01.478740713Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:32:01.478819 containerd[1444]: time="2025-01-13T20:32:01.478752673Z" level=info msg="StopPodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:32:01.479061 containerd[1444]: time="2025-01-13T20:32:01.479024763Z" level=info msg="RemovePodSandbox for \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:32:01.479061 containerd[1444]: time="2025-01-13T20:32:01.479048884Z" level=info msg="Forcibly stopping sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\"" Jan 13 20:32:01.479124 containerd[1444]: time="2025-01-13T20:32:01.479113567Z" level=info msg="TearDown network for sandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" successfully" Jan 13 20:32:01.481595 containerd[1444]: time="2025-01-13T20:32:01.481558299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:32:01.481670 containerd[1444]: time="2025-01-13T20:32:01.481607101Z" level=info msg="RemovePodSandbox \"af079c8844e9ba42a7895dd7b09acda3aa5a17df79792eb2d90dfc010cde87e0\" returns successfully" Jan 13 20:32:01.483214 containerd[1444]: time="2025-01-13T20:32:01.483179640Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:32:01.483312 containerd[1444]: time="2025-01-13T20:32:01.483286284Z" level=info msg="TearDown network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" successfully" Jan 13 20:32:01.483312 containerd[1444]: time="2025-01-13T20:32:01.483297485Z" level=info msg="StopPodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" returns successfully" Jan 13 20:32:01.485411 containerd[1444]: time="2025-01-13T20:32:01.484278282Z" level=info msg="RemovePodSandbox for \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:32:01.485411 containerd[1444]: time="2025-01-13T20:32:01.484307323Z" level=info msg="Forcibly stopping sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\"" Jan 13 20:32:01.485411 containerd[1444]: time="2025-01-13T20:32:01.484367205Z" level=info msg="TearDown network for sandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" successfully" Jan 13 20:32:01.486722 containerd[1444]: time="2025-01-13T20:32:01.486690172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.486849 containerd[1444]: time="2025-01-13T20:32:01.486832458Z" level=info msg="RemovePodSandbox \"27de9c15f7bd298640a0927a7e8398640a35679c21fd7c081aeedb6659c2f5f4\" returns successfully" Jan 13 20:32:01.487231 containerd[1444]: time="2025-01-13T20:32:01.487188151Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" Jan 13 20:32:01.487376 containerd[1444]: time="2025-01-13T20:32:01.487354637Z" level=info msg="TearDown network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" successfully" Jan 13 20:32:01.487427 containerd[1444]: time="2025-01-13T20:32:01.487373918Z" level=info msg="StopPodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" returns successfully" Jan 13 20:32:01.487777 containerd[1444]: time="2025-01-13T20:32:01.487754453Z" level=info msg="RemovePodSandbox for \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" Jan 13 20:32:01.487815 containerd[1444]: time="2025-01-13T20:32:01.487782974Z" level=info msg="Forcibly stopping sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\"" Jan 13 20:32:01.487877 containerd[1444]: time="2025-01-13T20:32:01.487861817Z" level=info msg="TearDown network for sandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" successfully" Jan 13 20:32:01.490291 containerd[1444]: time="2025-01-13T20:32:01.490244026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:32:01.490349 containerd[1444]: time="2025-01-13T20:32:01.490304709Z" level=info msg="RemovePodSandbox \"575b4a8be12a40638bcefaaf595bc8b74e07acf1f340748d4dd2eff55a09cdb3\" returns successfully" Jan 13 20:32:01.490675 containerd[1444]: time="2025-01-13T20:32:01.490634521Z" level=info msg="StopPodSandbox for \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\"" Jan 13 20:32:01.490888 containerd[1444]: time="2025-01-13T20:32:01.490867890Z" level=info msg="TearDown network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" successfully" Jan 13 20:32:01.490888 containerd[1444]: time="2025-01-13T20:32:01.490886811Z" level=info msg="StopPodSandbox for \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" returns successfully" Jan 13 20:32:01.493402 containerd[1444]: time="2025-01-13T20:32:01.491135860Z" level=info msg="RemovePodSandbox for \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\"" Jan 13 20:32:01.493402 containerd[1444]: time="2025-01-13T20:32:01.491164701Z" level=info msg="Forcibly stopping sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\"" Jan 13 20:32:01.493402 containerd[1444]: time="2025-01-13T20:32:01.491222383Z" level=info msg="TearDown network for sandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" successfully" Jan 13 20:32:01.493962 containerd[1444]: time="2025-01-13T20:32:01.493921885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.494029 containerd[1444]: time="2025-01-13T20:32:01.494010008Z" level=info msg="RemovePodSandbox \"365ee13f1cce16bec376662926685e196ec955c3fcb90e12ff93318f346c4c0e\" returns successfully" Jan 13 20:32:01.495822 containerd[1444]: time="2025-01-13T20:32:01.495515185Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:32:01.495919 containerd[1444]: time="2025-01-13T20:32:01.495899880Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:32:01.495919 containerd[1444]: time="2025-01-13T20:32:01.495916600Z" level=info msg="StopPodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:32:01.496291 containerd[1444]: time="2025-01-13T20:32:01.496260813Z" level=info msg="RemovePodSandbox for \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:32:01.496291 containerd[1444]: time="2025-01-13T20:32:01.496289254Z" level=info msg="Forcibly stopping sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\"" Jan 13 20:32:01.496369 containerd[1444]: time="2025-01-13T20:32:01.496355297Z" level=info msg="TearDown network for sandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" successfully" Jan 13 20:32:01.498552 containerd[1444]: time="2025-01-13T20:32:01.498508618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:32:01.498607 containerd[1444]: time="2025-01-13T20:32:01.498556860Z" level=info msg="RemovePodSandbox \"0bbc77ac27dcd4cc28a414f68140a9fb25c76c844fb1e929add47ed0782dae14\" returns successfully" Jan 13 20:32:01.498880 containerd[1444]: time="2025-01-13T20:32:01.498841551Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:32:01.498956 containerd[1444]: time="2025-01-13T20:32:01.498933954Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:32:01.498956 containerd[1444]: time="2025-01-13T20:32:01.498949235Z" level=info msg="StopPodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:32:01.499415 containerd[1444]: time="2025-01-13T20:32:01.499358810Z" level=info msg="RemovePodSandbox for \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:32:01.499446 containerd[1444]: time="2025-01-13T20:32:01.499421772Z" level=info msg="Forcibly stopping sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\"" Jan 13 20:32:01.499490 containerd[1444]: time="2025-01-13T20:32:01.499477575Z" level=info msg="TearDown network for sandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" successfully" Jan 13 20:32:01.501639 containerd[1444]: time="2025-01-13T20:32:01.501604935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.501693 containerd[1444]: time="2025-01-13T20:32:01.501654057Z" level=info msg="RemovePodSandbox \"b11f3c777ccfbe586043aaf5348e87c623e386b15e310d4cb356e89a2a933f91\" returns successfully" Jan 13 20:32:01.502107 containerd[1444]: time="2025-01-13T20:32:01.501955708Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:32:01.502107 containerd[1444]: time="2025-01-13T20:32:01.502039191Z" level=info msg="TearDown network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" successfully" Jan 13 20:32:01.502107 containerd[1444]: time="2025-01-13T20:32:01.502048231Z" level=info msg="StopPodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" returns successfully" Jan 13 20:32:01.502364 containerd[1444]: time="2025-01-13T20:32:01.502339682Z" level=info msg="RemovePodSandbox for \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:32:01.502440 containerd[1444]: time="2025-01-13T20:32:01.502367684Z" level=info msg="Forcibly stopping sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\"" Jan 13 20:32:01.502467 containerd[1444]: time="2025-01-13T20:32:01.502458047Z" level=info msg="TearDown network for sandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" successfully" Jan 13 20:32:01.504585 containerd[1444]: time="2025-01-13T20:32:01.504553766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:32:01.504634 containerd[1444]: time="2025-01-13T20:32:01.504606648Z" level=info msg="RemovePodSandbox \"afd20911543d52cf212f46eca770180b349ff140db18ac6c7c06fdd47763fc30\" returns successfully" Jan 13 20:32:01.505165 containerd[1444]: time="2025-01-13T20:32:01.505138628Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" Jan 13 20:32:01.505242 containerd[1444]: time="2025-01-13T20:32:01.505226391Z" level=info msg="TearDown network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" successfully" Jan 13 20:32:01.505242 containerd[1444]: time="2025-01-13T20:32:01.505239992Z" level=info msg="StopPodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" returns successfully" Jan 13 20:32:01.506602 containerd[1444]: time="2025-01-13T20:32:01.505532003Z" level=info msg="RemovePodSandbox for \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" Jan 13 20:32:01.506602 containerd[1444]: time="2025-01-13T20:32:01.505568204Z" level=info msg="Forcibly stopping sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\"" Jan 13 20:32:01.506602 containerd[1444]: time="2025-01-13T20:32:01.505632167Z" level=info msg="TearDown network for sandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" successfully" Jan 13 20:32:01.507905 containerd[1444]: time="2025-01-13T20:32:01.507869651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.508045 containerd[1444]: time="2025-01-13T20:32:01.508026777Z" level=info msg="RemovePodSandbox \"0787b805bad5165cb6759d604789088d8bf816406df376b67f799c2bc9e37445\" returns successfully" Jan 13 20:32:01.508593 containerd[1444]: time="2025-01-13T20:32:01.508567837Z" level=info msg="StopPodSandbox for \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\"" Jan 13 20:32:01.508673 containerd[1444]: time="2025-01-13T20:32:01.508657001Z" level=info msg="TearDown network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" successfully" Jan 13 20:32:01.508673 containerd[1444]: time="2025-01-13T20:32:01.508671241Z" level=info msg="StopPodSandbox for \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" returns successfully" Jan 13 20:32:01.508936 containerd[1444]: time="2025-01-13T20:32:01.508914010Z" level=info msg="RemovePodSandbox for \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\"" Jan 13 20:32:01.508964 containerd[1444]: time="2025-01-13T20:32:01.508941451Z" level=info msg="Forcibly stopping sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\"" Jan 13 20:32:01.509019 containerd[1444]: time="2025-01-13T20:32:01.509006574Z" level=info msg="TearDown network for sandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" successfully" Jan 13 20:32:01.511308 containerd[1444]: time="2025-01-13T20:32:01.511263699Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:32:01.511348 containerd[1444]: time="2025-01-13T20:32:01.511322541Z" level=info msg="RemovePodSandbox \"810d91ff56b8b6262b192e3d70fc1d3b46abb4599d132f225d7540d54de6617c\" returns successfully" Jan 13 20:32:01.511777 containerd[1444]: time="2025-01-13T20:32:01.511620712Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:32:01.511777 containerd[1444]: time="2025-01-13T20:32:01.511707356Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:32:01.511777 containerd[1444]: time="2025-01-13T20:32:01.511717036Z" level=info msg="StopPodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully" Jan 13 20:32:01.512026 containerd[1444]: time="2025-01-13T20:32:01.512001447Z" level=info msg="RemovePodSandbox for \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:32:01.512059 containerd[1444]: time="2025-01-13T20:32:01.512032568Z" level=info msg="Forcibly stopping sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\"" Jan 13 20:32:01.512108 containerd[1444]: time="2025-01-13T20:32:01.512094850Z" level=info msg="TearDown network for sandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" successfully" Jan 13 20:32:01.514219 containerd[1444]: time="2025-01-13T20:32:01.514180249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:32:01.514280 containerd[1444]: time="2025-01-13T20:32:01.514234171Z" level=info msg="RemovePodSandbox \"5dd44de8f439e0220f74cd84a7efcf49f5b3164745763714e8d505828ff31d08\" returns successfully"
Jan 13 20:32:01.514582 containerd[1444]: time="2025-01-13T20:32:01.514551783Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\""
Jan 13 20:32:01.514643 containerd[1444]: time="2025-01-13T20:32:01.514633986Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully"
Jan 13 20:32:01.514675 containerd[1444]: time="2025-01-13T20:32:01.514644306Z" level=info msg="StopPodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully"
Jan 13 20:32:01.515951 containerd[1444]: time="2025-01-13T20:32:01.514909516Z" level=info msg="RemovePodSandbox for \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\""
Jan 13 20:32:01.515951 containerd[1444]: time="2025-01-13T20:32:01.514935797Z" level=info msg="Forcibly stopping sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\""
Jan 13 20:32:01.515951 containerd[1444]: time="2025-01-13T20:32:01.515003800Z" level=info msg="TearDown network for sandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" successfully"
Jan 13 20:32:01.517213 containerd[1444]: time="2025-01-13T20:32:01.517182682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.517338 containerd[1444]: time="2025-01-13T20:32:01.517320527Z" level=info msg="RemovePodSandbox \"8a000c0113baed12f8901ede56eb7dd18d359a5eed65cbdbb9cf3d1a91127720\" returns successfully"
Jan 13 20:32:01.517709 containerd[1444]: time="2025-01-13T20:32:01.517669060Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\""
Jan 13 20:32:01.517789 containerd[1444]: time="2025-01-13T20:32:01.517771264Z" level=info msg="TearDown network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" successfully"
Jan 13 20:32:01.517815 containerd[1444]: time="2025-01-13T20:32:01.517787265Z" level=info msg="StopPodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" returns successfully"
Jan 13 20:32:01.518122 containerd[1444]: time="2025-01-13T20:32:01.518103197Z" level=info msg="RemovePodSandbox for \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\""
Jan 13 20:32:01.518156 containerd[1444]: time="2025-01-13T20:32:01.518127678Z" level=info msg="Forcibly stopping sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\""
Jan 13 20:32:01.518195 containerd[1444]: time="2025-01-13T20:32:01.518182520Z" level=info msg="TearDown network for sandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" successfully"
Jan 13 20:32:01.522384 containerd[1444]: time="2025-01-13T20:32:01.522339076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.522424 containerd[1444]: time="2025-01-13T20:32:01.522409919Z" level=info msg="RemovePodSandbox \"f9792a0fd6d130ae4ac9ddd5177d35aeec5844699f485e6b4952af488928f186\" returns successfully"
Jan 13 20:32:01.523114 containerd[1444]: time="2025-01-13T20:32:01.523077464Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\""
Jan 13 20:32:01.523190 containerd[1444]: time="2025-01-13T20:32:01.523164948Z" level=info msg="TearDown network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" successfully"
Jan 13 20:32:01.523190 containerd[1444]: time="2025-01-13T20:32:01.523179468Z" level=info msg="StopPodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" returns successfully"
Jan 13 20:32:01.523442 containerd[1444]: time="2025-01-13T20:32:01.523411437Z" level=info msg="RemovePodSandbox for \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\""
Jan 13 20:32:01.523474 containerd[1444]: time="2025-01-13T20:32:01.523441198Z" level=info msg="Forcibly stopping sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\""
Jan 13 20:32:01.523525 containerd[1444]: time="2025-01-13T20:32:01.523509601Z" level=info msg="TearDown network for sandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" successfully"
Jan 13 20:32:01.525845 containerd[1444]: time="2025-01-13T20:32:01.525798887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.525948 containerd[1444]: time="2025-01-13T20:32:01.525868730Z" level=info msg="RemovePodSandbox \"a89124c9bad91dceb97d8d394fa0156486a0d887ae475f510b062150245ec8b7\" returns successfully"
Jan 13 20:32:01.526216 containerd[1444]: time="2025-01-13T20:32:01.526175261Z" level=info msg="StopPodSandbox for \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\""
Jan 13 20:32:01.526311 containerd[1444]: time="2025-01-13T20:32:01.526285465Z" level=info msg="TearDown network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" successfully"
Jan 13 20:32:01.526311 containerd[1444]: time="2025-01-13T20:32:01.526300226Z" level=info msg="StopPodSandbox for \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" returns successfully"
Jan 13 20:32:01.526559 containerd[1444]: time="2025-01-13T20:32:01.526531275Z" level=info msg="RemovePodSandbox for \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\""
Jan 13 20:32:01.526595 containerd[1444]: time="2025-01-13T20:32:01.526558356Z" level=info msg="Forcibly stopping sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\""
Jan 13 20:32:01.526631 containerd[1444]: time="2025-01-13T20:32:01.526616998Z" level=info msg="TearDown network for sandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" successfully"
Jan 13 20:32:01.528832 containerd[1444]: time="2025-01-13T20:32:01.528793680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.528895 containerd[1444]: time="2025-01-13T20:32:01.528849482Z" level=info msg="RemovePodSandbox \"de48b700fdb26b66f48b472dae031378740f071593b9293f7c388046230ed93f\" returns successfully"
Jan 13 20:32:01.529311 containerd[1444]: time="2025-01-13T20:32:01.529220016Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\""
Jan 13 20:32:01.529531 containerd[1444]: time="2025-01-13T20:32:01.529494586Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully"
Jan 13 20:32:01.529794 containerd[1444]: time="2025-01-13T20:32:01.529649592Z" level=info msg="StopPodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully"
Jan 13 20:32:01.530001 containerd[1444]: time="2025-01-13T20:32:01.529972324Z" level=info msg="RemovePodSandbox for \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\""
Jan 13 20:32:01.530001 containerd[1444]: time="2025-01-13T20:32:01.529998165Z" level=info msg="Forcibly stopping sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\""
Jan 13 20:32:01.530182 containerd[1444]: time="2025-01-13T20:32:01.530076288Z" level=info msg="TearDown network for sandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" successfully"
Jan 13 20:32:01.532268 containerd[1444]: time="2025-01-13T20:32:01.532229849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.532330 containerd[1444]: time="2025-01-13T20:32:01.532289692Z" level=info msg="RemovePodSandbox \"5680a58aaf13663f5440a68e38b4efdae849726eb2a319e499ed3b7ae275da8e\" returns successfully"
Jan 13 20:32:01.532701 containerd[1444]: time="2025-01-13T20:32:01.532667946Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\""
Jan 13 20:32:01.532776 containerd[1444]: time="2025-01-13T20:32:01.532759589Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully"
Jan 13 20:32:01.532776 containerd[1444]: time="2025-01-13T20:32:01.532774070Z" level=info msg="StopPodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully"
Jan 13 20:32:01.533067 containerd[1444]: time="2025-01-13T20:32:01.533034720Z" level=info msg="RemovePodSandbox for \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\""
Jan 13 20:32:01.533097 containerd[1444]: time="2025-01-13T20:32:01.533067281Z" level=info msg="Forcibly stopping sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\""
Jan 13 20:32:01.533144 containerd[1444]: time="2025-01-13T20:32:01.533130643Z" level=info msg="TearDown network for sandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" successfully"
Jan 13 20:32:01.535425 containerd[1444]: time="2025-01-13T20:32:01.535390249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.535476 containerd[1444]: time="2025-01-13T20:32:01.535441931Z" level=info msg="RemovePodSandbox \"9b482a5c4bb12363b1a6a17d8dcbff51a3a7dec0fa667ad310df7ab3452c95d2\" returns successfully"
Jan 13 20:32:01.535761 containerd[1444]: time="2025-01-13T20:32:01.535726341Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\""
Jan 13 20:32:01.535834 containerd[1444]: time="2025-01-13T20:32:01.535819425Z" level=info msg="TearDown network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" successfully"
Jan 13 20:32:01.535864 containerd[1444]: time="2025-01-13T20:32:01.535834465Z" level=info msg="StopPodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" returns successfully"
Jan 13 20:32:01.536116 containerd[1444]: time="2025-01-13T20:32:01.536083515Z" level=info msg="RemovePodSandbox for \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\""
Jan 13 20:32:01.536149 containerd[1444]: time="2025-01-13T20:32:01.536115036Z" level=info msg="Forcibly stopping sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\""
Jan 13 20:32:01.536184 containerd[1444]: time="2025-01-13T20:32:01.536170118Z" level=info msg="TearDown network for sandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" successfully"
Jan 13 20:32:01.538301 containerd[1444]: time="2025-01-13T20:32:01.538262077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.538330 containerd[1444]: time="2025-01-13T20:32:01.538319919Z" level=info msg="RemovePodSandbox \"0fda8a9570d324edb0433b8bda9eb881e0737b5a733ae613f6dc4319afa45b89\" returns successfully"
Jan 13 20:32:01.538797 containerd[1444]: time="2025-01-13T20:32:01.538739055Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\""
Jan 13 20:32:01.538911 containerd[1444]: time="2025-01-13T20:32:01.538885300Z" level=info msg="TearDown network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" successfully"
Jan 13 20:32:01.538911 containerd[1444]: time="2025-01-13T20:32:01.538904541Z" level=info msg="StopPodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" returns successfully"
Jan 13 20:32:01.539250 containerd[1444]: time="2025-01-13T20:32:01.539199592Z" level=info msg="RemovePodSandbox for \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\""
Jan 13 20:32:01.539250 containerd[1444]: time="2025-01-13T20:32:01.539224553Z" level=info msg="Forcibly stopping sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\""
Jan 13 20:32:01.539314 containerd[1444]: time="2025-01-13T20:32:01.539287195Z" level=info msg="TearDown network for sandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" successfully"
Jan 13 20:32:01.541754 containerd[1444]: time="2025-01-13T20:32:01.541715327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.541793 containerd[1444]: time="2025-01-13T20:32:01.541770169Z" level=info msg="RemovePodSandbox \"fe771e0e430ca5efb32c7129e6d5d300482d8c0ba6878a05a2da196bd631956e\" returns successfully"
Jan 13 20:32:01.542327 containerd[1444]: time="2025-01-13T20:32:01.542171424Z" level=info msg="StopPodSandbox for \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\""
Jan 13 20:32:01.542327 containerd[1444]: time="2025-01-13T20:32:01.542255667Z" level=info msg="TearDown network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" successfully"
Jan 13 20:32:01.542327 containerd[1444]: time="2025-01-13T20:32:01.542265988Z" level=info msg="StopPodSandbox for \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" returns successfully"
Jan 13 20:32:01.542655 containerd[1444]: time="2025-01-13T20:32:01.542605881Z" level=info msg="RemovePodSandbox for \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\""
Jan 13 20:32:01.542655 containerd[1444]: time="2025-01-13T20:32:01.542633362Z" level=info msg="Forcibly stopping sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\""
Jan 13 20:32:01.542733 containerd[1444]: time="2025-01-13T20:32:01.542702644Z" level=info msg="TearDown network for sandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" successfully"
Jan 13 20:32:01.544784 containerd[1444]: time="2025-01-13T20:32:01.544753722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.544841 containerd[1444]: time="2025-01-13T20:32:01.544807324Z" level=info msg="RemovePodSandbox \"c968d71f9b881f185571f076a4d7f01ca6a8befbc601d352cd2cc06a103a16a5\" returns successfully"
Jan 13 20:32:01.545124 containerd[1444]: time="2025-01-13T20:32:01.545096695Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\""
Jan 13 20:32:01.545198 containerd[1444]: time="2025-01-13T20:32:01.545181338Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully"
Jan 13 20:32:01.545198 containerd[1444]: time="2025-01-13T20:32:01.545196498Z" level=info msg="StopPodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully"
Jan 13 20:32:01.545486 containerd[1444]: time="2025-01-13T20:32:01.545462748Z" level=info msg="RemovePodSandbox for \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\""
Jan 13 20:32:01.545532 containerd[1444]: time="2025-01-13T20:32:01.545490349Z" level=info msg="Forcibly stopping sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\""
Jan 13 20:32:01.545646 containerd[1444]: time="2025-01-13T20:32:01.545552072Z" level=info msg="TearDown network for sandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" successfully"
Jan 13 20:32:01.547643 containerd[1444]: time="2025-01-13T20:32:01.547606469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.547685 containerd[1444]: time="2025-01-13T20:32:01.547659511Z" level=info msg="RemovePodSandbox \"201fcea84fc72cf3e7297141969eb5a9f1e74d6c649286211a54aa6b484e8688\" returns successfully"
Jan 13 20:32:01.548010 containerd[1444]: time="2025-01-13T20:32:01.547973083Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\""
Jan 13 20:32:01.548078 containerd[1444]: time="2025-01-13T20:32:01.548059166Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully"
Jan 13 20:32:01.548078 containerd[1444]: time="2025-01-13T20:32:01.548073727Z" level=info msg="StopPodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully"
Jan 13 20:32:01.548423 containerd[1444]: time="2025-01-13T20:32:01.548401499Z" level=info msg="RemovePodSandbox for \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\""
Jan 13 20:32:01.548487 containerd[1444]: time="2025-01-13T20:32:01.548425220Z" level=info msg="Forcibly stopping sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\""
Jan 13 20:32:01.548514 containerd[1444]: time="2025-01-13T20:32:01.548487142Z" level=info msg="TearDown network for sandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" successfully"
Jan 13 20:32:01.550731 containerd[1444]: time="2025-01-13T20:32:01.550700946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.550783 containerd[1444]: time="2025-01-13T20:32:01.550748988Z" level=info msg="RemovePodSandbox \"79d07be7f8cae051478ba5911a78127eac4a12f6f0f9993c83d46bdfff268df2\" returns successfully"
Jan 13 20:32:01.551249 containerd[1444]: time="2025-01-13T20:32:01.551154923Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\""
Jan 13 20:32:01.551412 containerd[1444]: time="2025-01-13T20:32:01.551268967Z" level=info msg="TearDown network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" successfully"
Jan 13 20:32:01.551412 containerd[1444]: time="2025-01-13T20:32:01.551279968Z" level=info msg="StopPodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" returns successfully"
Jan 13 20:32:01.551764 containerd[1444]: time="2025-01-13T20:32:01.551732705Z" level=info msg="RemovePodSandbox for \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\""
Jan 13 20:32:01.551764 containerd[1444]: time="2025-01-13T20:32:01.551763266Z" level=info msg="Forcibly stopping sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\""
Jan 13 20:32:01.551867 containerd[1444]: time="2025-01-13T20:32:01.551819948Z" level=info msg="TearDown network for sandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" successfully"
Jan 13 20:32:01.553872 containerd[1444]: time="2025-01-13T20:32:01.553820263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.553925 containerd[1444]: time="2025-01-13T20:32:01.553882306Z" level=info msg="RemovePodSandbox \"3076b42b4e19f8db260de0c21042ebcebf4664b871c271bd52a96266d79a6cd0\" returns successfully"
Jan 13 20:32:01.554257 containerd[1444]: time="2025-01-13T20:32:01.554232319Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\""
Jan 13 20:32:01.554333 containerd[1444]: time="2025-01-13T20:32:01.554317722Z" level=info msg="TearDown network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" successfully"
Jan 13 20:32:01.554375 containerd[1444]: time="2025-01-13T20:32:01.554331883Z" level=info msg="StopPodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" returns successfully"
Jan 13 20:32:01.554694 containerd[1444]: time="2025-01-13T20:32:01.554666655Z" level=info msg="RemovePodSandbox for \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\""
Jan 13 20:32:01.554694 containerd[1444]: time="2025-01-13T20:32:01.554695336Z" level=info msg="Forcibly stopping sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\""
Jan 13 20:32:01.554846 containerd[1444]: time="2025-01-13T20:32:01.554760779Z" level=info msg="TearDown network for sandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" successfully"
Jan 13 20:32:01.557052 containerd[1444]: time="2025-01-13T20:32:01.557013104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.557097 containerd[1444]: time="2025-01-13T20:32:01.557071106Z" level=info msg="RemovePodSandbox \"43d7a10f7cc397bb52e62eea3963aba5edc27eacf88145ec873dde6d884492ef\" returns successfully"
Jan 13 20:32:01.557603 containerd[1444]: time="2025-01-13T20:32:01.557431120Z" level=info msg="StopPodSandbox for \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\""
Jan 13 20:32:01.557603 containerd[1444]: time="2025-01-13T20:32:01.557527643Z" level=info msg="TearDown network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" successfully"
Jan 13 20:32:01.557603 containerd[1444]: time="2025-01-13T20:32:01.557539444Z" level=info msg="StopPodSandbox for \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" returns successfully"
Jan 13 20:32:01.557805 containerd[1444]: time="2025-01-13T20:32:01.557766932Z" level=info msg="RemovePodSandbox for \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\""
Jan 13 20:32:01.557805 containerd[1444]: time="2025-01-13T20:32:01.557790253Z" level=info msg="Forcibly stopping sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\""
Jan 13 20:32:01.557896 containerd[1444]: time="2025-01-13T20:32:01.557872816Z" level=info msg="TearDown network for sandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" successfully"
Jan 13 20:32:01.560090 containerd[1444]: time="2025-01-13T20:32:01.560059659Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.560161 containerd[1444]: time="2025-01-13T20:32:01.560112421Z" level=info msg="RemovePodSandbox \"d00b8d109f27bf07dee86bd651f682fede8d2e47ca9a3c0bd8118a05cf04064c\" returns successfully"
Jan 13 20:32:01.560491 containerd[1444]: time="2025-01-13T20:32:01.560463154Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\""
Jan 13 20:32:01.560592 containerd[1444]: time="2025-01-13T20:32:01.560549997Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully"
Jan 13 20:32:01.560592 containerd[1444]: time="2025-01-13T20:32:01.560560038Z" level=info msg="StopPodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully"
Jan 13 20:32:01.560838 containerd[1444]: time="2025-01-13T20:32:01.560796486Z" level=info msg="RemovePodSandbox for \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\""
Jan 13 20:32:01.560838 containerd[1444]: time="2025-01-13T20:32:01.560820567Z" level=info msg="Forcibly stopping sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\""
Jan 13 20:32:01.560906 containerd[1444]: time="2025-01-13T20:32:01.560887090Z" level=info msg="TearDown network for sandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" successfully"
Jan 13 20:32:01.563066 containerd[1444]: time="2025-01-13T20:32:01.563035971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.563132 containerd[1444]: time="2025-01-13T20:32:01.563085933Z" level=info msg="RemovePodSandbox \"45ff471a86d5cc9762201b754fe01abb9de86e9ef5156039b3c1cf8490e93531\" returns successfully"
Jan 13 20:32:01.563476 containerd[1444]: time="2025-01-13T20:32:01.563456227Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\""
Jan 13 20:32:01.563549 containerd[1444]: time="2025-01-13T20:32:01.563534790Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully"
Jan 13 20:32:01.563594 containerd[1444]: time="2025-01-13T20:32:01.563547430Z" level=info msg="StopPodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully"
Jan 13 20:32:01.563815 containerd[1444]: time="2025-01-13T20:32:01.563784679Z" level=info msg="RemovePodSandbox for \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\""
Jan 13 20:32:01.563861 containerd[1444]: time="2025-01-13T20:32:01.563822641Z" level=info msg="Forcibly stopping sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\""
Jan 13 20:32:01.563923 containerd[1444]: time="2025-01-13T20:32:01.563905444Z" level=info msg="TearDown network for sandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" successfully"
Jan 13 20:32:01.566127 containerd[1444]: time="2025-01-13T20:32:01.566085766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.566170 containerd[1444]: time="2025-01-13T20:32:01.566142088Z" level=info msg="RemovePodSandbox \"2eae4c5e0fa6b49af593d6decbd84d7c37cf0f6b8dfee9611152f6820f311a9b\" returns successfully"
Jan 13 20:32:01.566480 containerd[1444]: time="2025-01-13T20:32:01.566452620Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\""
Jan 13 20:32:01.566740 containerd[1444]: time="2025-01-13T20:32:01.566644547Z" level=info msg="TearDown network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" successfully"
Jan 13 20:32:01.566740 containerd[1444]: time="2025-01-13T20:32:01.566662748Z" level=info msg="StopPodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" returns successfully"
Jan 13 20:32:01.566943 containerd[1444]: time="2025-01-13T20:32:01.566913077Z" level=info msg="RemovePodSandbox for \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\""
Jan 13 20:32:01.566990 containerd[1444]: time="2025-01-13T20:32:01.566943678Z" level=info msg="Forcibly stopping sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\""
Jan 13 20:32:01.567032 containerd[1444]: time="2025-01-13T20:32:01.567002560Z" level=info msg="TearDown network for sandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" successfully"
Jan 13 20:32:01.569080 containerd[1444]: time="2025-01-13T20:32:01.569050638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.569128 containerd[1444]: time="2025-01-13T20:32:01.569103600Z" level=info msg="RemovePodSandbox \"d33d1e8509942d7744d24f39788216376c83854abc1c2410cbfc6b2c9a91ced4\" returns successfully"
Jan 13 20:32:01.569416 containerd[1444]: time="2025-01-13T20:32:01.569390610Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\""
Jan 13 20:32:01.569494 containerd[1444]: time="2025-01-13T20:32:01.569480294Z" level=info msg="TearDown network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" successfully"
Jan 13 20:32:01.569537 containerd[1444]: time="2025-01-13T20:32:01.569494574Z" level=info msg="StopPodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" returns successfully"
Jan 13 20:32:01.569742 containerd[1444]: time="2025-01-13T20:32:01.569717343Z" level=info msg="RemovePodSandbox for \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\""
Jan 13 20:32:01.569771 containerd[1444]: time="2025-01-13T20:32:01.569748824Z" level=info msg="Forcibly stopping sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\""
Jan 13 20:32:01.569845 containerd[1444]: time="2025-01-13T20:32:01.569830987Z" level=info msg="TearDown network for sandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" successfully"
Jan 13 20:32:01.572279 containerd[1444]: time="2025-01-13T20:32:01.572243238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.572326 containerd[1444]: time="2025-01-13T20:32:01.572298240Z" level=info msg="RemovePodSandbox \"155303b45a5df0fdb386c10f061bf9045f2cc878f270b2c77616029c007bdbe9\" returns successfully"
Jan 13 20:32:01.575750 containerd[1444]: time="2025-01-13T20:32:01.575567123Z" level=info msg="StopPodSandbox for \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\""
Jan 13 20:32:01.575750 containerd[1444]: time="2025-01-13T20:32:01.575678208Z" level=info msg="TearDown network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" successfully"
Jan 13 20:32:01.575750 containerd[1444]: time="2025-01-13T20:32:01.575688848Z" level=info msg="StopPodSandbox for \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" returns successfully"
Jan 13 20:32:01.576376 containerd[1444]: time="2025-01-13T20:32:01.576350713Z" level=info msg="RemovePodSandbox for \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\""
Jan 13 20:32:01.576376 containerd[1444]: time="2025-01-13T20:32:01.576388714Z" level=info msg="Forcibly stopping sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\""
Jan 13 20:32:01.576468 containerd[1444]: time="2025-01-13T20:32:01.576449677Z" level=info msg="TearDown network for sandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" successfully"
Jan 13 20:32:01.578689 containerd[1444]: time="2025-01-13T20:32:01.578653040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:32:01.578788 containerd[1444]: time="2025-01-13T20:32:01.578706762Z" level=info msg="RemovePodSandbox \"407d140ba4e3ab443e4975a4338f8d336b2e7239870e63900342856f649212c3\" returns successfully"
Jan 13 20:32:05.225000 systemd[1]: Started sshd@19-10.0.0.144:22-10.0.0.1:37180.service - OpenSSH per-connection server daemon (10.0.0.1:37180).
Jan 13 20:32:05.267165 sshd[6062]: Accepted publickey for core from 10.0.0.1 port 37180 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:32:05.268321 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:32:05.272445 systemd-logind[1423]: New session 20 of user core.
Jan 13 20:32:05.280525 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:32:05.400927 sshd[6064]: Connection closed by 10.0.0.1 port 37180
Jan 13 20:32:05.401495 sshd-session[6062]: pam_unix(sshd:session): session closed for user core
Jan 13 20:32:05.404655 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:32:05.405083 systemd[1]: sshd@19-10.0.0.144:22-10.0.0.1:37180.service: Deactivated successfully.
Jan 13 20:32:05.407896 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:32:05.408728 systemd-logind[1423]: Removed session 20.