Jan 29 15:56:07.893910 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 15:56:07.893932 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025 Jan 29 15:56:07.893941 kernel: KASLR enabled Jan 29 15:56:07.893947 kernel: efi: EFI v2.7 by EDK II Jan 29 15:56:07.893953 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Jan 29 15:56:07.893958 kernel: random: crng init done Jan 29 15:56:07.893965 kernel: secureboot: Secure boot disabled Jan 29 15:56:07.893971 kernel: ACPI: Early table checksum verification disabled Jan 29 15:56:07.893976 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Jan 29 15:56:07.893984 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 29 15:56:07.893990 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.893996 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894002 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894008 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894015 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894023 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894029 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894035 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894042 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 15:56:07.894048 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 29 15:56:07.894054 kernel: NUMA: Failed to initialise from firmware Jan 29 15:56:07.894060 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 15:56:07.894066 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jan 29 15:56:07.894073 kernel: Zone ranges: Jan 29 15:56:07.894079 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 15:56:07.894086 kernel: DMA32 empty Jan 29 15:56:07.894092 kernel: Normal empty Jan 29 15:56:07.894098 kernel: Movable zone start for each node Jan 29 15:56:07.894104 kernel: Early memory node ranges Jan 29 15:56:07.894110 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Jan 29 15:56:07.894116 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Jan 29 15:56:07.894123 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Jan 29 15:56:07.894129 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 29 15:56:07.894135 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 29 15:56:07.894141 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 29 15:56:07.894147 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 29 15:56:07.894153 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 29 15:56:07.894160 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 29 15:56:07.894166 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 15:56:07.894173 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 29 15:56:07.894181 kernel: psci: 
probing for conduit method from ACPI. Jan 29 15:56:07.894188 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 15:56:07.894195 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 15:56:07.894202 kernel: psci: Trusted OS migration not required Jan 29 15:56:07.894209 kernel: psci: SMC Calling Convention v1.1 Jan 29 15:56:07.894216 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 15:56:07.894222 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 15:56:07.894228 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 15:56:07.894235 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 29 15:56:07.894242 kernel: Detected PIPT I-cache on CPU0 Jan 29 15:56:07.894248 kernel: CPU features: detected: GIC system register CPU interface Jan 29 15:56:07.894255 kernel: CPU features: detected: Hardware dirty bit management Jan 29 15:56:07.894261 kernel: CPU features: detected: Spectre-v4 Jan 29 15:56:07.894268 kernel: CPU features: detected: Spectre-BHB Jan 29 15:56:07.894275 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 15:56:07.894282 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 15:56:07.894288 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 15:56:07.894295 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 15:56:07.894301 kernel: alternatives: applying boot alternatives Jan 29 15:56:07.894308 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac Jan 29 15:56:07.894315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 15:56:07.894322 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 15:56:07.894329 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 15:56:07.894335 kernel: Fallback order for Node 0: 0 Jan 29 15:56:07.894343 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 29 15:56:07.894349 kernel: Policy zone: DMA Jan 29 15:56:07.894356 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 15:56:07.894362 kernel: software IO TLB: area num 4. Jan 29 15:56:07.894369 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 29 15:56:07.894376 kernel: Memory: 2387536K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184752K reserved, 0K cma-reserved) Jan 29 15:56:07.894382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 15:56:07.894389 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 15:56:07.894396 kernel: rcu: RCU event tracing is enabled. Jan 29 15:56:07.894403 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 15:56:07.894409 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 15:56:07.894416 kernel: Tracing variant of Tasks RCU enabled. Jan 29 15:56:07.894424 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 15:56:07.894430 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 15:56:07.894437 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 15:56:07.894443 kernel: GICv3: 256 SPIs implemented Jan 29 15:56:07.894450 kernel: GICv3: 0 Extended SPIs implemented Jan 29 15:56:07.894456 kernel: Root IRQ handler: gic_handle_irq Jan 29 15:56:07.894463 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 15:56:07.894469 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 15:56:07.894476 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 15:56:07.894482 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 15:56:07.894489 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 15:56:07.894497 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 29 15:56:07.894504 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 29 15:56:07.894510 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 15:56:07.894517 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 15:56:07.894523 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 15:56:07.894530 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 15:56:07.894537 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 15:56:07.894543 kernel: arm-pv: using stolen time PV Jan 29 15:56:07.894550 kernel: Console: colour dummy device 80x25 Jan 29 15:56:07.894557 kernel: ACPI: Core revision 20230628 Jan 29 15:56:07.894564 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 15:56:07.894571 kernel: pid_max: default: 32768 minimum: 301 Jan 29 15:56:07.894578 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 15:56:07.894632 kernel: landlock: Up and running. Jan 29 15:56:07.894640 kernel: SELinux: Initializing. Jan 29 15:56:07.894647 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 15:56:07.894654 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 15:56:07.894661 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 15:56:07.894668 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 15:56:07.894675 kernel: rcu: Hierarchical SRCU implementation. Jan 29 15:56:07.894684 kernel: rcu: Max phase no-delay instances is 400. Jan 29 15:56:07.894691 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 15:56:07.894698 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 15:56:07.894704 kernel: Remapping and enabling EFI services. Jan 29 15:56:07.894711 kernel: smp: Bringing up secondary CPUs ... 
Jan 29 15:56:07.894717 kernel: Detected PIPT I-cache on CPU1 Jan 29 15:56:07.894724 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 15:56:07.894731 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 29 15:56:07.894738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 15:56:07.894746 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 15:56:07.894753 kernel: Detected PIPT I-cache on CPU2 Jan 29 15:56:07.894764 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 29 15:56:07.894772 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 29 15:56:07.894779 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 15:56:07.894786 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 29 15:56:07.894793 kernel: Detected PIPT I-cache on CPU3 Jan 29 15:56:07.894800 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 29 15:56:07.894808 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 29 15:56:07.894816 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 15:56:07.894823 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 29 15:56:07.894830 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 15:56:07.894837 kernel: SMP: Total of 4 processors activated. Jan 29 15:56:07.894844 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 15:56:07.894851 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 15:56:07.894858 kernel: CPU features: detected: Common not Private translations Jan 29 15:56:07.894866 kernel: CPU features: detected: CRC32 instructions Jan 29 15:56:07.894874 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 15:56:07.894881 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 15:56:07.894888 kernel: CPU features: detected: LSE atomic instructions Jan 29 15:56:07.894895 kernel: CPU features: detected: Privileged Access Never Jan 29 15:56:07.894902 kernel: CPU features: detected: RAS Extension Support Jan 29 15:56:07.894909 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 15:56:07.894916 kernel: CPU: All CPU(s) started at EL1 Jan 29 15:56:07.894923 kernel: alternatives: applying system-wide alternatives Jan 29 15:56:07.894930 kernel: devtmpfs: initialized Jan 29 15:56:07.894938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 15:56:07.894946 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 15:56:07.894953 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 15:56:07.894960 kernel: SMBIOS 3.0.0 present. 
Jan 29 15:56:07.894967 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jan 29 15:56:07.894974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 15:56:07.894981 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 15:56:07.894989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 15:56:07.894996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 15:56:07.895003 kernel: audit: initializing netlink subsys (disabled) Jan 29 15:56:07.895012 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Jan 29 15:56:07.895019 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 15:56:07.895026 kernel: cpuidle: using governor menu Jan 29 15:56:07.895034 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 15:56:07.895041 kernel: ASID allocator initialised with 32768 entries Jan 29 15:56:07.895048 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 15:56:07.895055 kernel: Serial: AMBA PL011 UART driver Jan 29 15:56:07.895062 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 15:56:07.895069 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 15:56:07.895077 kernel: Modules: 509280 pages in range for PLT usage Jan 29 15:56:07.895085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 15:56:07.895092 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 15:56:07.895099 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 15:56:07.895106 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 15:56:07.895113 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 15:56:07.895120 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 15:56:07.895127 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 15:56:07.895134 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 15:56:07.895142 kernel: ACPI: Added _OSI(Module Device) Jan 29 15:56:07.895149 kernel: ACPI: Added _OSI(Processor Device) Jan 29 15:56:07.895156 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 15:56:07.895163 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 15:56:07.895171 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 15:56:07.895178 kernel: ACPI: Interpreter enabled Jan 29 15:56:07.895185 kernel: ACPI: Using GIC for interrupt routing Jan 29 15:56:07.895191 kernel: ACPI: MCFG table detected, 1 entries Jan 29 15:56:07.895199 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 15:56:07.895207 kernel: printk: console [ttyAMA0] enabled Jan 29 15:56:07.895214 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 15:56:07.895347 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 15:56:07.895421 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 15:56:07.895485 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 15:56:07.895547 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 15:56:07.895631 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 15:56:07.895645 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 29 15:56:07.895652 
kernel: PCI host bridge to bus 0000:00 Jan 29 15:56:07.895725 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 15:56:07.895788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 15:56:07.895848 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 15:56:07.895921 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 15:56:07.896002 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 15:56:07.896082 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 15:56:07.896150 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 29 15:56:07.896215 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 29 15:56:07.896280 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 15:56:07.896345 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 15:56:07.896408 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 29 15:56:07.896475 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 29 15:56:07.896537 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 15:56:07.896611 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 15:56:07.896688 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 15:56:07.896698 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 15:56:07.896705 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 15:56:07.896712 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 15:56:07.896719 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 15:56:07.896726 kernel: iommu: Default domain type: Translated Jan 29 15:56:07.896737 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 15:56:07.896744 kernel: efivars: Registered efivars operations Jan 29 15:56:07.896752 kernel: vgaarb: loaded Jan 29 15:56:07.896759 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 15:56:07.896766 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 15:56:07.896773 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 15:56:07.896780 kernel: pnp: PnP ACPI init Jan 29 15:56:07.896860 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 15:56:07.896872 kernel: pnp: PnP ACPI: found 1 devices Jan 29 15:56:07.896879 kernel: NET: Registered PF_INET protocol family Jan 29 15:56:07.896886 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 15:56:07.896894 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 15:56:07.896901 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 15:56:07.896908 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 15:56:07.896915 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 15:56:07.896923 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 15:56:07.896930 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 15:56:07.896939 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 15:56:07.896946 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 15:56:07.896953 kernel: PCI: CLS 0 bytes, default 64 Jan 29 15:56:07.896960 kernel: kvm [1]: HYP mode not available 
Jan 29 15:56:07.896968 kernel: Initialise system trusted keyrings Jan 29 15:56:07.896975 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 15:56:07.896982 kernel: Key type asymmetric registered Jan 29 15:56:07.896989 kernel: Asymmetric key parser 'x509' registered Jan 29 15:56:07.896996 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 15:56:07.897005 kernel: io scheduler mq-deadline registered Jan 29 15:56:07.897012 kernel: io scheduler kyber registered Jan 29 15:56:07.897018 kernel: io scheduler bfq registered Jan 29 15:56:07.897025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 15:56:07.897032 kernel: ACPI: button: Power Button [PWRB] Jan 29 15:56:07.897040 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 15:56:07.897119 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 29 15:56:07.897128 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 15:56:07.897135 kernel: thunder_xcv, ver 1.0 Jan 29 15:56:07.897144 kernel: thunder_bgx, ver 1.0 Jan 29 15:56:07.897151 kernel: nicpf, ver 1.0 Jan 29 15:56:07.897158 kernel: nicvf, ver 1.0 Jan 29 15:56:07.897232 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 15:56:07.897294 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T15:56:07 UTC (1738166167) Jan 29 15:56:07.897304 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 15:56:07.897311 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 15:56:07.897319 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 15:56:07.897328 kernel: watchdog: Hard watchdog permanently disabled Jan 29 15:56:07.897335 kernel: NET: Registered PF_INET6 protocol family Jan 29 15:56:07.897342 kernel: Segment Routing with IPv6 Jan 29 15:56:07.897349 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 15:56:07.897356 kernel: NET: Registered PF_PACKET protocol family Jan 29 15:56:07.897363 kernel: Key type dns_resolver registered Jan 29 15:56:07.897370 kernel: registered taskstats version 1 Jan 29 15:56:07.897377 kernel: Loading compiled-in X.509 certificates Jan 29 15:56:07.897385 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b' Jan 29 15:56:07.897393 kernel: Key type .fscrypt registered Jan 29 15:56:07.897400 kernel: Key type fscrypt-provisioning registered Jan 29 15:56:07.897407 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 15:56:07.897414 kernel: ima: Allocated hash algorithm: sha1 Jan 29 15:56:07.897421 kernel: ima: No architecture policies found Jan 29 15:56:07.897428 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 15:56:07.897435 kernel: clk: Disabling unused clocks Jan 29 15:56:07.897442 kernel: Freeing unused kernel memory: 38336K Jan 29 15:56:07.897449 kernel: Run /init as init process Jan 29 15:56:07.897458 kernel: with arguments: Jan 29 15:56:07.897465 kernel: /init Jan 29 15:56:07.897472 kernel: with environment: Jan 29 15:56:07.897478 kernel: HOME=/ Jan 29 15:56:07.897485 kernel: TERM=linux Jan 29 15:56:07.897492 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 15:56:07.897500 systemd[1]: Successfully made /usr/ read-only. 
Jan 29 15:56:07.897510 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 15:56:07.897519 systemd[1]: Detected virtualization kvm. Jan 29 15:56:07.897526 systemd[1]: Detected architecture arm64. Jan 29 15:56:07.897533 systemd[1]: Running in initrd. Jan 29 15:56:07.897541 systemd[1]: No hostname configured, using default hostname. Jan 29 15:56:07.897548 systemd[1]: Hostname set to . Jan 29 15:56:07.897555 systemd[1]: Initializing machine ID from VM UUID. Jan 29 15:56:07.897563 systemd[1]: Queued start job for default target initrd.target. Jan 29 15:56:07.897570 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 15:56:07.897579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 15:56:07.897606 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 15:56:07.897620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 15:56:07.897628 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 15:56:07.897636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 15:56:07.897645 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 15:56:07.897655 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 15:56:07.897663 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 15:56:07.897671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 15:56:07.897678 systemd[1]: Reached target paths.target - Path Units. Jan 29 15:56:07.897686 systemd[1]: Reached target slices.target - Slice Units. Jan 29 15:56:07.897694 systemd[1]: Reached target swap.target - Swaps. Jan 29 15:56:07.897702 systemd[1]: Reached target timers.target - Timer Units. Jan 29 15:56:07.897709 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 15:56:07.897717 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 15:56:07.897726 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 15:56:07.897733 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 15:56:07.897741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 15:56:07.897749 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 15:56:07.897757 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 15:56:07.897764 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 15:56:07.897772 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 15:56:07.897780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 15:56:07.897787 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 15:56:07.897796 systemd[1]: Starting systemd-fsck-usr.service... 
Jan 29 15:56:07.897804 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 15:56:07.897811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 15:56:07.897819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 15:56:07.897827 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 15:56:07.897834 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 15:56:07.897843 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 15:56:07.897851 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 15:56:07.897859 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 15:56:07.897867 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 15:56:07.897874 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 15:56:07.897900 systemd-journald[239]: Collecting audit messages is disabled. Jan 29 15:56:07.897920 kernel: Bridge firewalling registered Jan 29 15:56:07.897927 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 15:56:07.897935 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 15:56:07.897943 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 15:56:07.897953 systemd-journald[239]: Journal started Jan 29 15:56:07.897971 systemd-journald[239]: Runtime Journal (/run/log/journal/9cb4aeedd2464bc79bd459346de7a841) is 5.9M, max 47.3M, 41.4M free. Jan 29 15:56:07.872850 systemd-modules-load[240]: Inserted module 'overlay' Jan 29 15:56:07.891230 systemd-modules-load[240]: Inserted module 'br_netfilter' Jan 29 15:56:07.901866 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 15:56:07.912742 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 15:56:07.914180 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 15:56:07.915438 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 15:56:07.917787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 15:56:07.918946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 15:56:07.921698 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 15:56:07.923199 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 15:56:07.925729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 15:56:07.934380 dracut-cmdline[276]: dracut-dracut-053 Jan 29 15:56:07.936628 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac Jan 29 15:56:07.957777 systemd-resolved[278]: Positive Trust Anchors: Jan 29 15:56:07.957794 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 15:56:07.957825 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 15:56:07.963709 systemd-resolved[278]: Defaulting to hostname 'linux'. Jan 29 15:56:07.964883 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 15:56:07.965784 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 15:56:08.002620 kernel: SCSI subsystem initialized Jan 29 15:56:08.007606 kernel: Loading iSCSI transport class v2.0-870. Jan 29 15:56:08.014620 kernel: iscsi: registered transport (tcp) Jan 29 15:56:08.027916 kernel: iscsi: registered transport (qla4xxx) Jan 29 15:56:08.027960 kernel: QLogic iSCSI HBA Driver Jan 29 15:56:08.075534 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 15:56:08.086741 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 15:56:08.102881 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 15:56:08.102916 kernel: device-mapper: uevent: version 1.0.3 Jan 29 15:56:08.103693 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 15:56:08.148620 kernel: raid6: neonx8 gen() 15701 MB/s Jan 29 15:56:08.165622 kernel: raid6: neonx4 gen() 15766 MB/s Jan 29 15:56:08.182617 kernel: raid6: neonx2 gen() 13187 MB/s Jan 29 15:56:08.199615 kernel: raid6: neonx1 gen() 10501 MB/s Jan 29 15:56:08.216617 kernel: raid6: int64x8 gen() 6786 MB/s Jan 29 15:56:08.233615 kernel: raid6: int64x4 gen() 7343 MB/s Jan 29 15:56:08.250616 kernel: raid6: int64x2 gen() 6108 MB/s Jan 29 15:56:08.267601 kernel: raid6: int64x1 gen() 5050 MB/s Jan 29 15:56:08.267620 kernel: raid6: using algorithm neonx4 gen() 15766 MB/s Jan 29 15:56:08.284621 kernel: raid6: .... xor() 12369 MB/s, rmw enabled Jan 29 15:56:08.284645 kernel: raid6: using neon recovery algorithm Jan 29 15:56:08.289604 kernel: xor: measuring software checksum speed Jan 29 15:56:08.289625 kernel: 8regs : 21567 MB/sec Jan 29 15:56:08.291051 kernel: 32regs : 19721 MB/sec Jan 29 15:56:08.291063 kernel: arm64_neon : 27432 MB/sec Jan 29 15:56:08.291077 kernel: xor: using function: arm64_neon (27432 MB/sec) Jan 29 15:56:08.341615 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 15:56:08.351059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 15:56:08.361759 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 15:56:08.376182 systemd-udevd[462]: Using default interface naming scheme 'v255'. Jan 29 15:56:08.379834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 15:56:08.382911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 29 15:56:08.402546 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 29 15:56:08.429409 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 15:56:08.445770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 15:56:08.488882 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 15:56:08.503213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 15:56:08.518918 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 15:56:08.520195 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 15:56:08.522636 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 15:56:08.525385 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 15:56:08.531789 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 15:56:08.540036 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 15:56:08.550547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 15:56:08.556701 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 29 15:56:08.561390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 15:56:08.561490 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 15:56:08.561501 kernel: GPT:9289727 != 19775487 Jan 29 15:56:08.561510 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 15:56:08.561520 kernel: GPT:9289727 != 19775487 Jan 29 15:56:08.561528 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 15:56:08.561537 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 15:56:08.550698 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 15:56:08.557765 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 15:56:08.560129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 15:56:08.560258 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 15:56:08.563205 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 15:56:08.569783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 15:56:08.579617 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (534) Jan 29 15:56:08.585610 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (526) Jan 29 15:56:08.591292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 15:56:08.601619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 15:56:08.621300 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 15:56:08.628486 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 15:56:08.634344 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 15:56:08.635259 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 15:56:08.649741 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 29 15:56:08.651274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 15:56:08.655340 disk-uuid[562]: Primary Header is updated. Jan 29 15:56:08.655340 disk-uuid[562]: Secondary Entries is updated. Jan 29 15:56:08.655340 disk-uuid[562]: Secondary Header is updated. Jan 29 15:56:08.657785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 15:56:08.675625 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 15:56:09.668627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 15:56:09.671817 disk-uuid[563]: The operation has completed successfully. Jan 29 15:56:09.709047 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 15:56:09.710167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 15:56:09.742788 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 15:56:09.745492 sh[583]: Success Jan 29 15:56:09.763735 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 15:56:09.796394 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 15:56:09.810060 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 15:56:09.811450 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 15:56:09.827325 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a Jan 29 15:56:09.827368 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 15:56:09.828311 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 15:56:09.828328 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 15:56:09.828954 kernel: BTRFS info (device dm-0): using free space tree Jan 29 15:56:09.834037 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 15:56:09.834942 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 15:56:09.844766 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 15:56:09.846227 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 15:56:09.863133 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 15:56:09.863183 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 15:56:09.863194 kernel: BTRFS info (device vda6): using free space tree Jan 29 15:56:09.866885 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 15:56:09.875980 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 15:56:09.877332 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 15:56:09.885787 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 15:56:09.893784 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 15:56:09.981869 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 15:56:09.997802 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 15:56:10.035346 systemd-networkd[770]: lo: Link UP Jan 29 15:56:10.035355 systemd-networkd[770]: lo: Gained carrier Jan 29 15:56:10.037214 systemd-networkd[770]: Enumeration completed Jan 29 15:56:10.037323 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 15:56:10.038219 systemd[1]: Reached target network.target - Network. Jan 29 15:56:10.040207 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 15:56:10.040211 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 15:56:10.040915 systemd-networkd[770]: eth0: Link UP Jan 29 15:56:10.040918 systemd-networkd[770]: eth0: Gained carrier Jan 29 15:56:10.040924 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 15:56:10.055396 ignition[684]: Ignition 2.20.0 Jan 29 15:56:10.055406 ignition[684]: Stage: fetch-offline Jan 29 15:56:10.055441 ignition[684]: no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:10.055450 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:10.055622 ignition[684]: parsed url from cmdline: "" Jan 29 15:56:10.059635 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 15:56:10.055626 ignition[684]: no config URL provided Jan 29 15:56:10.055630 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 15:56:10.055638 ignition[684]: no config at "/usr/lib/ignition/user.ign" Jan 29 15:56:10.055664 ignition[684]: op(1): [started] loading QEMU firmware config module Jan 29 15:56:10.055669 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 15:56:10.067249 ignition[684]: op(1): [finished] loading QEMU firmware config module Jan 29 15:56:10.108418 ignition[684]: parsing config with SHA512: 04745bd894dd652e8821f2cf9ff18428628c595024bf1d7bfca0679cf01888a5dc1e32b9328512e639b5feb2c7f394b243dc065cfa0cfd934831d356c8883e90 Jan 29 15:56:10.113169 unknown[684]: fetched base config from "system" Jan 29 15:56:10.113180 unknown[684]: fetched user config from "qemu" Jan 29 15:56:10.114877 ignition[684]: fetch-offline: fetch-offline passed Jan 29 15:56:10.114984 ignition[684]: Ignition finished successfully Jan 29 15:56:10.117004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 15:56:10.118043 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 15:56:10.130748 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 15:56:10.144567 ignition[784]: Ignition 2.20.0 Jan 29 15:56:10.144578 ignition[784]: Stage: kargs Jan 29 15:56:10.144774 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:10.144785 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:10.145707 ignition[784]: kargs: kargs passed Jan 29 15:56:10.147767 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 15:56:10.145756 ignition[784]: Ignition finished successfully Jan 29 15:56:10.155761 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 15:56:10.165487 ignition[792]: Ignition 2.20.0 Jan 29 15:56:10.165497 ignition[792]: Stage: disks Jan 29 15:56:10.165680 ignition[792]: no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:10.165690 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:10.166580 ignition[792]: disks: disks passed Jan 29 15:56:10.168401 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 15:56:10.166648 ignition[792]: Ignition finished successfully Jan 29 15:56:10.170272 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 15:56:10.171260 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 15:56:10.172829 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 15:56:10.174086 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 15:56:10.175693 systemd[1]: Reached target basic.target - Basic System. Jan 29 15:56:10.178157 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 15:56:10.191307 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.7 Jan 29 15:56:10.191318 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Jan 29 15:56:10.193629 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 15:56:10.196661 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 15:56:10.211718 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 15:56:10.269611 kernel: EXT4-fs (vda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none. Jan 29 15:56:10.269970 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 15:56:10.271456 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 15:56:10.286675 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 15:56:10.288375 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 15:56:10.289479 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 15:56:10.289579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 15:56:10.289665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 15:56:10.298055 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (811) Jan 29 15:56:10.295947 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 15:56:10.298056 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 15:56:10.303256 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 15:56:10.303282 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 15:56:10.303293 kernel: BTRFS info (device vda6): using free space tree Jan 29 15:56:10.305606 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 15:56:10.306916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 15:56:10.342145 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 15:56:10.346621 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jan 29 15:56:10.350713 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 15:56:10.354285 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 15:56:10.422006 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 15:56:10.429715 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 15:56:10.431987 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 15:56:10.436607 kernel: BTRFS info (device vda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 15:56:10.450294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 15:56:10.455012 ignition[924]: INFO : Ignition 2.20.0 Jan 29 15:56:10.455012 ignition[924]: INFO : Stage: mount Jan 29 15:56:10.457506 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:10.457506 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:10.457506 ignition[924]: INFO : mount: mount passed Jan 29 15:56:10.457506 ignition[924]: INFO : Ignition finished successfully Jan 29 15:56:10.457931 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 15:56:10.471688 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 15:56:10.868271 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 15:56:10.881766 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 15:56:10.888359 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (937) Jan 29 15:56:10.888391 kernel: BTRFS info (device vda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 15:56:10.889216 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 15:56:10.889231 kernel: BTRFS info (device vda6): using free space tree Jan 29 15:56:10.891618 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 15:56:10.892918 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 15:56:10.913739 ignition[954]: INFO : Ignition 2.20.0 Jan 29 15:56:10.913739 ignition[954]: INFO : Stage: files Jan 29 15:56:10.915635 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:10.915635 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:10.915635 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 29 15:56:10.919597 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 15:56:10.919597 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 15:56:10.919597 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 15:56:10.919597 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 15:56:10.919597 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 15:56:10.919597 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 29 15:56:10.919597 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 29 15:56:10.917991 unknown[954]: wrote ssh authorized keys file for user: core Jan 29 15:56:10.963878 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 15:56:11.089613 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 15:56:11.091773 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 15:56:11.105334 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Jan 29 15:56:11.261725 systemd-networkd[770]: eth0: Gained IPv6LL Jan 29 15:56:11.452192 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 15:56:12.024361 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 29 15:56:12.024361 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 15:56:12.028749 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 15:56:12.047816 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 15:56:12.051442 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 15:56:12.053134 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 15:56:12.053134 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 29 15:56:12.053134 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 15:56:12.053134 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 15:56:12.053134 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 15:56:12.053134 ignition[954]: INFO : files: files passed Jan 29 15:56:12.053134 ignition[954]: INFO : Ignition finished successfully Jan 29 15:56:12.054552 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 15:56:12.067821 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 15:56:12.071230 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 29 15:56:12.075240 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 15:56:12.075350 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 15:56:12.079288 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 15:56:12.080985 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 15:56:12.080985 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 15:56:12.084050 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 15:56:12.083781 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 15:56:12.085303 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 15:56:12.102779 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 15:56:12.123816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 15:56:12.123948 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 15:56:12.125639 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 15:56:12.126915 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 15:56:12.128360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 15:56:12.129271 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 15:56:12.146623 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 15:56:12.160837 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 15:56:12.169047 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 15:56:12.170409 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 15:56:12.172127 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 15:56:12.173819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 15:56:12.173949 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 15:56:12.175955 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 15:56:12.177519 systemd[1]: Stopped target basic.target - Basic System. Jan 29 15:56:12.178816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 15:56:12.180191 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 15:56:12.181724 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 15:56:12.183320 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 15:56:12.185003 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 15:56:12.186554 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 15:56:12.188216 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 15:56:12.189540 systemd[1]: Stopped target swap.target - Swaps. Jan 29 15:56:12.190748 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 15:56:12.190886 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 15:56:12.192823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 15:56:12.194499 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 15:56:12.196079 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 15:56:12.199661 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 15:56:12.200674 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 15:56:12.200798 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 15:56:12.203158 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 15:56:12.203280 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 15:56:12.205103 systemd[1]: Stopped target paths.target - Path Units. Jan 29 15:56:12.206323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 15:56:12.209642 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 15:56:12.210905 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 15:56:12.212624 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 15:56:12.213885 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 15:56:12.213975 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 15:56:12.215208 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 15:56:12.215283 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 15:56:12.216721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 15:56:12.216882 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 15:56:12.218209 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 15:56:12.218312 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 15:56:12.234818 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 15:56:12.235515 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 15:56:12.235679 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 15:56:12.240840 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 15:56:12.241797 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 15:56:12.241936 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 15:56:12.244923 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 15:56:12.245046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 15:56:12.249651 ignition[1010]: INFO : Ignition 2.20.0 Jan 29 15:56:12.249651 ignition[1010]: INFO : Stage: umount Jan 29 15:56:12.249651 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 15:56:12.249651 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 15:56:12.253503 ignition[1010]: INFO : umount: umount passed Jan 29 15:56:12.253503 ignition[1010]: INFO : Ignition finished successfully Jan 29 15:56:12.251071 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 15:56:12.251172 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 15:56:12.254437 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 15:56:12.254533 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 15:56:12.257732 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 29 15:56:12.258566 systemd[1]: Stopped target network.target - Network. Jan 29 15:56:12.260159 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 15:56:12.260252 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 15:56:12.261555 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 15:56:12.261626 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 15:56:12.263029 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 15:56:12.263079 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 15:56:12.264283 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 15:56:12.264325 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 15:56:12.265918 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 15:56:12.267143 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 15:56:12.269063 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 15:56:12.269157 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 15:56:12.271462 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 15:56:12.271600 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 15:56:12.275935 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 15:56:12.276045 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 15:56:12.279066 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 15:56:12.279298 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 15:56:12.279384 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 15:56:12.281990 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 15:56:12.282912 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 15:56:12.282953 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 15:56:12.294715 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 15:56:12.295402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 15:56:12.295459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 15:56:12.297007 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 15:56:12.297046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 15:56:12.299329 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 15:56:12.299372 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 15:56:12.300910 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 15:56:12.300952 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 15:56:12.303193 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 15:56:12.305423 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 15:56:12.305479 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 15:56:12.312820 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 15:56:12.313658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 15:56:12.314738 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 29 15:56:12.314860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 15:56:12.317107 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 15:56:12.317224 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 15:56:12.318692 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 15:56:12.318721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 15:56:12.320380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 15:56:12.320422 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 15:56:12.323075 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 15:56:12.323120 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 15:56:12.325337 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 15:56:12.325378 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 15:56:12.337757 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 15:56:12.338601 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 15:56:12.338667 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 15:56:12.341219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 15:56:12.341265 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 15:56:12.344554 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 29 15:56:12.344643 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 29 15:56:12.344965 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 15:56:12.345046 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 15:56:12.346897 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 15:56:12.348917 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 15:56:12.357683 systemd[1]: Switching root. Jan 29 15:56:12.386328 systemd-journald[239]: Journal stopped Jan 29 15:56:13.129291 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 29 15:56:13.129349 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 15:56:13.129361 kernel: SELinux: policy capability open_perms=1 Jan 29 15:56:13.129371 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 15:56:13.129381 kernel: SELinux: policy capability always_check_network=0 Jan 29 15:56:13.129391 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 15:56:13.129401 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 15:56:13.129415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 15:56:13.129428 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 15:56:13.129438 kernel: audit: type=1403 audit(1738166172.533:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 15:56:13.129449 systemd[1]: Successfully loaded SELinux policy in 31.681ms. Jan 29 15:56:13.129466 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.521ms. 
Jan 29 15:56:13.129479 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 15:56:13.129491 systemd[1]: Detected virtualization kvm. Jan 29 15:56:13.129501 systemd[1]: Detected architecture arm64. Jan 29 15:56:13.129511 systemd[1]: Detected first boot. Jan 29 15:56:13.129521 systemd[1]: Initializing machine ID from VM UUID. Jan 29 15:56:13.129534 zram_generator::config[1058]: No configuration found. Jan 29 15:56:13.129545 kernel: NET: Registered PF_VSOCK protocol family Jan 29 15:56:13.129555 systemd[1]: Populated /etc with preset unit settings. Jan 29 15:56:13.129572 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 15:56:13.129598 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 15:56:13.129612 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 15:56:13.129623 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 15:56:13.129633 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 15:56:13.129646 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 15:56:13.129657 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 15:56:13.129667 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 15:56:13.129681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 15:56:13.129693 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 15:56:13.129703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 15:56:13.129713 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 15:56:13.129724 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 15:56:13.129736 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 15:56:13.129748 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 15:56:13.129759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 15:56:13.129770 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 15:56:13.129780 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 15:56:13.129790 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 15:56:13.129801 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 15:56:13.129812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 15:56:13.129824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 15:56:13.129834 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 15:56:13.129845 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 15:56:13.129855 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
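(Annotation, not part of the journal.) The lines above show systemd 256.8 detecting KVM, the arm64 architecture, and a first boot, then initializing the machine ID from the VM UUID. A small sketch of standard commands that report the same facts from a shell on the running system:

systemd-detect-virt            # should print "kvm" on this guest
uname -m                       # aarch64
systemctl --version | head -n 2   # version string and compiled-in feature flags
cat /etc/machine-id            # populated from the VM UUID on first boot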
Jan 29 15:56:13.129868 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 15:56:13.129879 systemd[1]: Reached target slices.target - Slice Units. Jan 29 15:56:13.129890 systemd[1]: Reached target swap.target - Swaps. Jan 29 15:56:13.129900 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 15:56:13.129911 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 15:56:13.129923 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 15:56:13.129933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 15:56:13.129944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 15:56:13.129955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 15:56:13.129965 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 15:56:13.129976 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 15:56:13.129986 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 15:56:13.129997 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 15:56:13.130008 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 15:56:13.130020 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 15:56:13.130031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 15:56:13.130042 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 15:56:13.130053 systemd[1]: Reached target machines.target - Containers. Jan 29 15:56:13.130063 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 15:56:13.130074 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 15:56:13.130085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 15:56:13.130095 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 15:56:13.130106 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 15:56:13.130118 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 15:56:13.130128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 15:56:13.130139 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 15:56:13.130150 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 15:56:13.130161 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 15:56:13.130172 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 15:56:13.130182 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 15:56:13.130193 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 15:56:13.130204 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 15:56:13.130216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 29 15:56:13.130226 kernel: fuse: init (API version 7.39) Jan 29 15:56:13.130237 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 15:56:13.130247 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 15:56:13.130257 kernel: loop: module loaded Jan 29 15:56:13.130269 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 15:56:13.130282 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 15:56:13.130292 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 15:56:13.130305 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 15:56:13.130315 kernel: ACPI: bus type drm_connector registered Jan 29 15:56:13.130325 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 15:56:13.130335 systemd[1]: Stopped verity-setup.service. Jan 29 15:56:13.130346 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 15:56:13.130358 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 15:56:13.130368 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 15:56:13.130378 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 15:56:13.130389 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 15:56:13.130399 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 15:56:13.130409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 15:56:13.130438 systemd-journald[1130]: Collecting audit messages is disabled. Jan 29 15:56:13.130462 systemd-journald[1130]: Journal started Jan 29 15:56:13.130484 systemd-journald[1130]: Runtime Journal (/run/log/journal/9cb4aeedd2464bc79bd459346de7a841) is 5.9M, max 47.3M, 41.4M free. Jan 29 15:56:12.939512 systemd[1]: Queued start job for default target multi-user.target. Jan 29 15:56:12.953718 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 15:56:12.954132 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 15:56:13.131606 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 15:56:13.134126 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 15:56:13.135843 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 15:56:13.138726 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 15:56:13.139945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 15:56:13.140128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 15:56:13.141339 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 15:56:13.141534 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 15:56:13.142904 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 15:56:13.143085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 15:56:13.144296 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 15:56:13.144486 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 15:56:13.145704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 15:56:13.145880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 15:56:13.147015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 15:56:13.148214 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 15:56:13.149506 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 15:56:13.151036 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 15:56:13.165047 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 15:56:13.173676 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 15:56:13.175653 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 15:56:13.176494 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 15:56:13.176533 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 15:56:13.178306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 15:56:13.180365 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 15:56:13.182308 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 15:56:13.183258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 15:56:13.184483 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 15:56:13.186419 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 15:56:13.187375 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 15:56:13.190788 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 15:56:13.194103 systemd-journald[1130]: Time spent on flushing to /var/log/journal/9cb4aeedd2464bc79bd459346de7a841 is 11.242ms for 867 entries. Jan 29 15:56:13.194103 systemd-journald[1130]: System Journal (/var/log/journal/9cb4aeedd2464bc79bd459346de7a841) is 8M, max 195.6M, 187.6M free. Jan 29 15:56:13.221778 systemd-journald[1130]: Received client request to flush runtime journal. Jan 29 15:56:13.191720 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 15:56:13.195506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 15:56:13.212327 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 15:56:13.215140 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 15:56:13.218333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 15:56:13.219500 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 15:56:13.221640 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 15:56:13.222876 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 15:56:13.224299 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 15:56:13.227190 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 15:56:13.229600 kernel: loop0: detected capacity change from 0 to 123192 Jan 29 15:56:13.230353 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 15:56:13.235192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 15:56:13.245809 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 15:56:13.250424 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 15:56:13.252074 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 15:56:13.256777 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 15:56:13.258996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 15:56:13.262721 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 15:56:13.265535 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 15:56:13.292305 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 15:56:13.292323 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 15:56:13.296622 kernel: loop1: detected capacity change from 0 to 201592 Jan 29 15:56:13.300259 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 15:56:13.329612 kernel: loop2: detected capacity change from 0 to 113512 Jan 29 15:56:13.358614 kernel: loop3: detected capacity change from 0 to 123192 Jan 29 15:56:13.364604 kernel: loop4: detected capacity change from 0 to 201592 Jan 29 15:56:13.370625 kernel: loop5: detected capacity change from 0 to 113512 Jan 29 15:56:13.373264 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 15:56:13.373714 (sd-merge)[1201]: Merged extensions into '/usr'. Jan 29 15:56:13.376909 systemd[1]: Reload requested from client PID 1177 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 15:56:13.376927 systemd[1]: Reloading... Jan 29 15:56:13.440669 zram_generator::config[1229]: No configuration found. Jan 29 15:56:13.468270 ldconfig[1171]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 15:56:13.537031 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 15:56:13.586837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 15:56:13.587129 systemd[1]: Reloading finished in 209 ms. Jan 29 15:56:13.604341 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 15:56:13.605629 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 15:56:13.624001 systemd[1]: Starting ensure-sysext.service... Jan 29 15:56:13.625701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 15:56:13.637367 systemd[1]: Reload requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Jan 29 15:56:13.637379 systemd[1]: Reloading... Jan 29 15:56:13.643176 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
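(Annotation, not part of the journal.) The (sd-merge) messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, followed by a service reload. A hedged sketch of how the merge can be listed or redone by hand with the standard systemd-sysext tool:

systemd-sysext status      # lists merged extension images and the hierarchies they overlay
systemd-sysext refresh     # unmerge and re-merge after adding or removing an image under /etc/extensions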
Jan 29 15:56:13.643385 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 15:56:13.644013 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 15:56:13.644207 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 29 15:56:13.644251 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 29 15:56:13.646650 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 15:56:13.646662 systemd-tmpfiles[1265]: Skipping /boot Jan 29 15:56:13.654895 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 15:56:13.654911 systemd-tmpfiles[1265]: Skipping /boot Jan 29 15:56:13.684659 zram_generator::config[1294]: No configuration found. Jan 29 15:56:13.762523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 15:56:13.812758 systemd[1]: Reloading finished in 175 ms. Jan 29 15:56:13.827667 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 15:56:13.844739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 15:56:13.854354 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 15:56:13.856710 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 15:56:13.857642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 15:56:13.858812 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 15:56:13.863882 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 15:56:13.865851 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 15:56:13.867047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 15:56:13.867166 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 15:56:13.869195 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 15:56:13.876170 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 15:56:13.882757 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 15:56:13.885657 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 15:56:13.889405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 15:56:13.889627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 15:56:13.893036 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 15:56:13.893184 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 15:56:13.894604 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 15:56:13.894744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 15:56:13.900090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 15:56:13.905999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 15:56:13.908224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 15:56:13.911424 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 15:56:13.912523 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 15:56:13.912655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 15:56:13.914071 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Jan 29 15:56:13.914934 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 15:56:13.916911 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 15:56:13.921624 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 15:56:13.923351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 15:56:13.923516 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 15:56:13.927085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 15:56:13.927255 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 15:56:13.928667 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 15:56:13.928836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 15:56:13.941387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 15:56:13.961473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 15:56:13.967431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 15:56:13.971917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 15:56:13.982135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 15:56:13.983826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 15:56:13.983971 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 15:56:13.989361 augenrules[1392]: No rules Jan 29 15:56:13.989885 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 15:56:13.990800 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 15:56:13.993496 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 15:56:13.995050 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 15:56:13.997172 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 15:56:13.998631 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 29 15:56:14.000275 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 15:56:14.002265 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 15:56:14.002429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 15:56:14.003969 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 15:56:14.004125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 15:56:14.005712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 15:56:14.005889 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 15:56:14.007324 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 15:56:14.007476 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 15:56:14.017913 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 15:56:14.020626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1385) Jan 29 15:56:14.025992 systemd[1]: Finished ensure-sysext.service. Jan 29 15:56:14.056849 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 15:56:14.063979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 15:56:14.076816 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 15:56:14.080303 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 15:56:14.081450 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 15:56:14.081521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 15:56:14.083882 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 15:56:14.114243 systemd-resolved[1342]: Positive Trust Anchors: Jan 29 15:56:14.114259 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 15:56:14.114291 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 15:56:14.117799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 15:56:14.125341 systemd-resolved[1342]: Defaulting to hostname 'linux'. Jan 29 15:56:14.133741 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 15:56:14.137058 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 15:56:14.147871 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 15:56:14.157003 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 15:56:14.165927 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Jan 29 15:56:14.170468 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 15:56:14.171418 systemd-networkd[1419]: lo: Link UP Jan 29 15:56:14.171431 systemd-networkd[1419]: lo: Gained carrier Jan 29 15:56:14.171917 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 15:56:14.172337 systemd-networkd[1419]: Enumeration completed Jan 29 15:56:14.172733 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 15:56:14.172787 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 15:56:14.172791 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 15:56:14.173271 systemd-networkd[1419]: eth0: Link UP Jan 29 15:56:14.173274 systemd-networkd[1419]: eth0: Gained carrier Jan 29 15:56:14.173287 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 15:56:14.173955 systemd[1]: Reached target network.target - Network. Jan 29 15:56:14.176290 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 15:56:14.178442 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 15:56:14.180814 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 15:56:14.191906 systemd-networkd[1419]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 15:56:14.193497 systemd-timesyncd[1420]: Network configuration changed, trying to establish connection. Jan 29 15:56:14.194306 systemd-timesyncd[1420]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 15:56:14.194361 systemd-timesyncd[1420]: Initial clock synchronization to Wed 2025-01-29 15:56:14.176205 UTC. Jan 29 15:56:14.197684 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 15:56:14.208040 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 15:56:14.211616 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 15:56:14.213384 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 15:56:14.214474 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 15:56:14.215507 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 15:56:14.216648 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 15:56:14.217881 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 15:56:14.218974 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 15:56:14.220034 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 15:56:14.221028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 15:56:14.221061 systemd[1]: Reached target paths.target - Path Units. Jan 29 15:56:14.221835 systemd[1]: Reached target timers.target - Timer Units. Jan 29 15:56:14.223613 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
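(Annotation, not part of the journal.) At this point systemd-networkd has brought up eth0 with the DHCPv4 lease 10.0.0.7/16 via gateway 10.0.0.1, systemd-resolved is answering name lookups, and systemd-timesyncd has synchronized against 10.0.0.1:123. A short sketch of the matching status queries, all standard systemd utilities:

networkctl status eth0           # shows the DHCPv4 address and gateway seen in the log
resolvectl status                # per-link DNS configuration from systemd-resolved
timedatectl timesync-status      # NTP server and synchronization state from systemd-timesyncd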
Jan 29 15:56:14.225947 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 15:56:14.228955 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 15:56:14.230207 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 15:56:14.231326 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 15:56:14.235471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 15:56:14.236988 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 15:56:14.239156 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 15:56:14.240652 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 15:56:14.241582 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 15:56:14.242352 systemd[1]: Reached target basic.target - Basic System. Jan 29 15:56:14.243261 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 15:56:14.243293 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 15:56:14.244236 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 15:56:14.246141 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 15:56:14.247861 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 15:56:14.249058 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 15:56:14.252841 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 15:56:14.253693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 15:56:14.258200 jq[1445]: false Jan 29 15:56:14.259497 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 15:56:14.263520 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 15:56:14.266790 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 15:56:14.268832 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 15:56:14.272548 extend-filesystems[1446]: Found loop3 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found loop4 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found loop5 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda1 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda2 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda3 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found usr Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda4 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda6 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda7 Jan 29 15:56:14.272548 extend-filesystems[1446]: Found vda9 Jan 29 15:56:14.272548 extend-filesystems[1446]: Checking size of /dev/vda9 Jan 29 15:56:14.275119 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 15:56:14.281853 dbus-daemon[1444]: [system] SELinux support is enabled Jan 29 15:56:14.277180 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 29 15:56:14.277707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 15:56:14.278374 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 15:56:14.281841 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 15:56:14.283567 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 15:56:14.286753 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 15:56:14.292786 jq[1461]: true Jan 29 15:56:14.293005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 15:56:14.293178 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 15:56:14.294961 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 15:56:14.295142 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 15:56:14.296234 extend-filesystems[1446]: Resized partition /dev/vda9 Jan 29 15:56:14.299801 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Jan 29 15:56:14.302612 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 15:56:14.303373 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 15:56:14.303939 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 15:56:14.315979 jq[1467]: true Jan 29 15:56:14.320847 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 15:56:14.324658 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1402) Jan 29 15:56:14.320905 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 15:56:14.324798 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 15:56:14.324819 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 15:56:14.338289 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 15:56:14.344893 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 15:56:14.348819 update_engine[1460]: I20250129 15:56:14.348523 1460 main.cc:92] Flatcar Update Engine starting Jan 29 15:56:14.354307 systemd[1]: Started update-engine.service - Update Engine. Jan 29 15:56:14.355593 update_engine[1460]: I20250129 15:56:14.355501 1460 update_check_scheduler.cc:74] Next update check in 8m29s Jan 29 15:56:14.356429 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 15:56:14.356429 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 15:56:14.356429 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 15:56:14.362057 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Jan 29 15:56:14.359034 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 15:56:14.360426 systemd-logind[1458]: New seat seat0. Jan 29 15:56:14.363742 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 15:56:14.364840 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 15:56:14.365994 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 15:56:14.367616 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 15:56:14.370372 tar[1464]: linux-arm64/LICENSE Jan 29 15:56:14.370372 tar[1464]: linux-arm64/helm Jan 29 15:56:14.399874 bash[1500]: Updated "/home/core/.ssh/authorized_keys" Jan 29 15:56:14.405711 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 15:56:14.408186 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 15:56:14.439029 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 15:56:14.601694 containerd[1473]: time="2025-01-29T15:56:14.601501480Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 15:56:14.631886 containerd[1473]: time="2025-01-29T15:56:14.631827000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.633738 containerd[1473]: time="2025-01-29T15:56:14.633700120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 15:56:14.633940 containerd[1473]: time="2025-01-29T15:56:14.633918840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 15:56:14.634076 containerd[1473]: time="2025-01-29T15:56:14.634057080Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 15:56:14.634503 containerd[1473]: time="2025-01-29T15:56:14.634480320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 15:56:14.634694 containerd[1473]: time="2025-01-29T15:56:14.634673680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.634818000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.634837240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635060240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635075560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635088280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
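(Annotation, not part of the journal.) The extend-filesystems service above grew the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 4k blocks, using resize2fs 1.47.1. A hedged sketch of the equivalent manual steps (device name taken from the log; run as root):

lsblk /dev/vda9        # confirm the partition size the kernel reports
resize2fs /dev/vda9    # ext4 supports online growth while mounted at /
df -h /                # filesystem size after the resize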
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635098160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635166720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635350760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635468200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635481560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 15:56:14.635627 containerd[1473]: time="2025-01-29T15:56:14.635564080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 15:56:14.635878 containerd[1473]: time="2025-01-29T15:56:14.635640880Z" level=info msg="metadata content store policy set" policy=shared Jan 29 15:56:14.639294 containerd[1473]: time="2025-01-29T15:56:14.639216280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 15:56:14.639294 containerd[1473]: time="2025-01-29T15:56:14.639275760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 15:56:14.639294 containerd[1473]: time="2025-01-29T15:56:14.639293560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 15:56:14.639410 containerd[1473]: time="2025-01-29T15:56:14.639319080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 15:56:14.639410 containerd[1473]: time="2025-01-29T15:56:14.639333400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 15:56:14.639757 containerd[1473]: time="2025-01-29T15:56:14.639716280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 15:56:14.640172 containerd[1473]: time="2025-01-29T15:56:14.640149800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 15:56:14.640293 containerd[1473]: time="2025-01-29T15:56:14.640275120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 15:56:14.640334 containerd[1473]: time="2025-01-29T15:56:14.640298200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 15:56:14.640334 containerd[1473]: time="2025-01-29T15:56:14.640315200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 15:56:14.640334 containerd[1473]: time="2025-01-29T15:56:14.640329400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 15:56:14.640384 containerd[1473]: time="2025-01-29T15:56:14.640342640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640384 containerd[1473]: time="2025-01-29T15:56:14.640354920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640384 containerd[1473]: time="2025-01-29T15:56:14.640368440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640433 containerd[1473]: time="2025-01-29T15:56:14.640383880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640433 containerd[1473]: time="2025-01-29T15:56:14.640397800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640433 containerd[1473]: time="2025-01-29T15:56:14.640410960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640433 containerd[1473]: time="2025-01-29T15:56:14.640422200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 15:56:14.640496 containerd[1473]: time="2025-01-29T15:56:14.640442960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640496 containerd[1473]: time="2025-01-29T15:56:14.640456440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640496 containerd[1473]: time="2025-01-29T15:56:14.640469320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640496 containerd[1473]: time="2025-01-29T15:56:14.640484280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640496400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640509040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640520600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640533520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640545960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640574 containerd[1473]: time="2025-01-29T15:56:14.640574880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640601720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640613840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640625320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640639960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640665280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640677720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640707 containerd[1473]: time="2025-01-29T15:56:14.640689520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 15:56:14.640884 containerd[1473]: time="2025-01-29T15:56:14.640871160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 15:56:14.640909 containerd[1473]: time="2025-01-29T15:56:14.640891000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 15:56:14.640909 containerd[1473]: time="2025-01-29T15:56:14.640901800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 15:56:14.640959 containerd[1473]: time="2025-01-29T15:56:14.640913560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 15:56:14.640959 containerd[1473]: time="2025-01-29T15:56:14.640922560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 15:56:14.640959 containerd[1473]: time="2025-01-29T15:56:14.640936120Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 15:56:14.640959 containerd[1473]: time="2025-01-29T15:56:14.640947040Z" level=info msg="NRI interface is disabled by configuration." Jan 29 15:56:14.641025 containerd[1473]: time="2025-01-29T15:56:14.640964520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 15:56:14.641372 containerd[1473]: time="2025-01-29T15:56:14.641309080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 15:56:14.641372 containerd[1473]: time="2025-01-29T15:56:14.641364280Z" level=info msg="Connect containerd service" Jan 29 15:56:14.641511 containerd[1473]: time="2025-01-29T15:56:14.641398440Z" level=info msg="using legacy CRI server" Jan 29 15:56:14.641511 containerd[1473]: time="2025-01-29T15:56:14.641405920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 15:56:14.641746 containerd[1473]: time="2025-01-29T15:56:14.641728760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 15:56:14.642389 containerd[1473]: time="2025-01-29T15:56:14.642363160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 15:56:14.642715 
containerd[1473]: time="2025-01-29T15:56:14.642658640Z" level=info msg="Start subscribing containerd event" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643341000Z" level=info msg="Start recovering state" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643435360Z" level=info msg="Start event monitor" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643448200Z" level=info msg="Start snapshots syncer" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643463800Z" level=info msg="Start cni network conf syncer for default" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643472400Z" level=info msg="Start streaming server" Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643600480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 15:56:14.643648 containerd[1473]: time="2025-01-29T15:56:14.643654200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 15:56:14.643817 containerd[1473]: time="2025-01-29T15:56:14.643719520Z" level=info msg="containerd successfully booted in 0.043706s" Jan 29 15:56:14.643812 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 15:56:14.744794 tar[1464]: linux-arm64/README.md Jan 29 15:56:14.760630 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 15:56:14.929633 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 15:56:14.948794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 15:56:14.965881 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 15:56:14.971285 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 15:56:14.971518 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 15:56:14.974619 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 15:56:14.986845 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 15:56:14.991846 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 15:56:14.993839 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 15:56:14.995096 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 15:56:16.125720 systemd-networkd[1419]: eth0: Gained IPv6LL Jan 29 15:56:16.128514 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 15:56:16.130246 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 15:56:16.143889 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 15:56:16.146758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:16.148872 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 15:56:16.171371 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 15:56:16.172425 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 15:56:16.176199 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 15:56:16.181315 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 15:56:16.676427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:16.677724 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 29 15:56:16.679901 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 15:56:16.684919 systemd[1]: Startup finished in 531ms (kernel) + 4.827s (initrd) + 4.191s (userspace) = 9.550s. Jan 29 15:56:17.071778 kubelet[1557]: E0129 15:56:17.071664 1557 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 15:56:17.074147 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 15:56:17.074300 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 15:56:17.074636 systemd[1]: kubelet.service: Consumed 802ms CPU time, 251.1M memory peak. Jan 29 15:56:20.352022 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 15:56:20.353214 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:50340.service - OpenSSH per-connection server daemon (10.0.0.1:50340). Jan 29 15:56:20.414168 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 50340 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:20.415865 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:20.425612 systemd-logind[1458]: New session 1 of user core. Jan 29 15:56:20.426482 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 15:56:20.439861 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 15:56:20.448985 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 15:56:20.452101 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 15:56:20.458578 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 15:56:20.460714 systemd-logind[1458]: New session c1 of user core. Jan 29 15:56:20.579364 systemd[1574]: Queued start job for default target default.target. Jan 29 15:56:20.587514 systemd[1574]: Created slice app.slice - User Application Slice. Jan 29 15:56:20.587544 systemd[1574]: Reached target paths.target - Paths. Jan 29 15:56:20.587616 systemd[1574]: Reached target timers.target - Timers. Jan 29 15:56:20.588897 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 15:56:20.597519 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 15:56:20.597613 systemd[1574]: Reached target sockets.target - Sockets. Jan 29 15:56:20.597657 systemd[1574]: Reached target basic.target - Basic System. Jan 29 15:56:20.597685 systemd[1574]: Reached target default.target - Main User Target. Jan 29 15:56:20.597713 systemd[1574]: Startup finished in 131ms. Jan 29 15:56:20.597828 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 15:56:20.599183 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 15:56:20.662379 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:50350.service - OpenSSH per-connection server daemon (10.0.0.1:50350). 
Jan 29 15:56:20.704749 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 50350 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:20.706380 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:20.711762 systemd-logind[1458]: New session 2 of user core. Jan 29 15:56:20.721787 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 15:56:20.772310 sshd[1587]: Connection closed by 10.0.0.1 port 50350 Jan 29 15:56:20.772819 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:20.785026 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:50350.service: Deactivated successfully. Jan 29 15:56:20.787875 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 15:56:20.788605 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. Jan 29 15:56:20.790282 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:50364.service - OpenSSH per-connection server daemon (10.0.0.1:50364). Jan 29 15:56:20.791890 systemd-logind[1458]: Removed session 2. Jan 29 15:56:20.833514 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 50364 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:20.834789 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:20.838947 systemd-logind[1458]: New session 3 of user core. Jan 29 15:56:20.850802 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 15:56:20.897884 sshd[1595]: Connection closed by 10.0.0.1 port 50364 Jan 29 15:56:20.898397 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:20.907570 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:50364.service: Deactivated successfully. Jan 29 15:56:20.908995 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 15:56:20.910212 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. Jan 29 15:56:20.917886 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:50372.service - OpenSSH per-connection server daemon (10.0.0.1:50372). Jan 29 15:56:20.918770 systemd-logind[1458]: Removed session 3. Jan 29 15:56:20.957778 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 50372 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:20.958915 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:20.963345 systemd-logind[1458]: New session 4 of user core. Jan 29 15:56:20.968734 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 15:56:21.020547 sshd[1603]: Connection closed by 10.0.0.1 port 50372 Jan 29 15:56:21.020937 sshd-session[1600]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:21.040898 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:50372.service: Deactivated successfully. Jan 29 15:56:21.042400 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 15:56:21.043071 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. Jan 29 15:56:21.056866 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:50380.service - OpenSSH per-connection server daemon (10.0.0.1:50380). Jan 29 15:56:21.057856 systemd-logind[1458]: Removed session 4. 
Jan 29 15:56:21.095813 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 50380 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:21.097073 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:21.101516 systemd-logind[1458]: New session 5 of user core. Jan 29 15:56:21.110736 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 15:56:21.173555 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 15:56:21.173896 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 15:56:21.185490 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 29 15:56:21.187057 sshd[1611]: Connection closed by 10.0.0.1 port 50380 Jan 29 15:56:21.187683 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:21.202678 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:50380.service: Deactivated successfully. Jan 29 15:56:21.204853 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 15:56:21.205677 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. Jan 29 15:56:21.207677 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:50382.service - OpenSSH per-connection server daemon (10.0.0.1:50382). Jan 29 15:56:21.208389 systemd-logind[1458]: Removed session 5. Jan 29 15:56:21.250500 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 50382 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:21.251725 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:21.256250 systemd-logind[1458]: New session 6 of user core. Jan 29 15:56:21.262735 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 15:56:21.313843 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 15:56:21.314104 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 15:56:21.316970 sudo[1622]: pam_unix(sudo:session): session closed for user root Jan 29 15:56:21.321222 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 15:56:21.321718 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 15:56:21.338923 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 15:56:21.360544 augenrules[1644]: No rules Jan 29 15:56:21.362002 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 15:56:21.362233 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 15:56:21.363081 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 29 15:56:21.364627 sshd[1620]: Connection closed by 10.0.0.1 port 50382 Jan 29 15:56:21.364654 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:21.380620 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:50382.service: Deactivated successfully. Jan 29 15:56:21.382032 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 15:56:21.383255 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. Jan 29 15:56:21.398881 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:50386.service - OpenSSH per-connection server daemon (10.0.0.1:50386). Jan 29 15:56:21.399824 systemd-logind[1458]: Removed session 6. 
Jan 29 15:56:21.436480 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 50386 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:56:21.437916 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:56:21.442614 systemd-logind[1458]: New session 7 of user core. Jan 29 15:56:21.451790 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 15:56:21.502424 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 15:56:21.502719 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 15:56:21.832901 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 15:56:21.832912 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 15:56:22.093163 dockerd[1676]: time="2025-01-29T15:56:22.093040015Z" level=info msg="Starting up" Jan 29 15:56:22.245995 dockerd[1676]: time="2025-01-29T15:56:22.245942198Z" level=info msg="Loading containers: start." Jan 29 15:56:22.386609 kernel: Initializing XFRM netlink socket Jan 29 15:56:22.463582 systemd-networkd[1419]: docker0: Link UP Jan 29 15:56:22.494835 dockerd[1676]: time="2025-01-29T15:56:22.494782141Z" level=info msg="Loading containers: done." Jan 29 15:56:22.509207 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1686398111-merged.mount: Deactivated successfully. Jan 29 15:56:22.512893 dockerd[1676]: time="2025-01-29T15:56:22.512842874Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 15:56:22.512979 dockerd[1676]: time="2025-01-29T15:56:22.512945547Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 15:56:22.513149 dockerd[1676]: time="2025-01-29T15:56:22.513118429Z" level=info msg="Daemon has completed initialization" Jan 29 15:56:22.547008 dockerd[1676]: time="2025-01-29T15:56:22.546949887Z" level=info msg="API listen on /run/docker.sock" Jan 29 15:56:22.547130 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 15:56:23.046866 containerd[1473]: time="2025-01-29T15:56:23.046801369Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 15:56:23.889833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297946933.mount: Deactivated successfully. 
Jan 29 15:56:24.979656 containerd[1473]: time="2025-01-29T15:56:24.979565735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:24.979996 containerd[1473]: time="2025-01-29T15:56:24.979907390Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=26220950" Jan 29 15:56:24.980946 containerd[1473]: time="2025-01-29T15:56:24.980892253Z" level=info msg="ImageCreate event name:\"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:24.984437 containerd[1473]: time="2025-01-29T15:56:24.984404566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:24.986286 containerd[1473]: time="2025-01-29T15:56:24.986254703Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"26217748\" in 1.939403036s" Jan 29 15:56:24.986517 containerd[1473]: time="2025-01-29T15:56:24.986379530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:265c2dedf28ab9b88c7910c1643e210ad62483867f2bab88f56919a6e49a0d19\"" Jan 29 15:56:24.987043 containerd[1473]: time="2025-01-29T15:56:24.987019339Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 15:56:26.128841 containerd[1473]: time="2025-01-29T15:56:26.128791395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:26.130938 containerd[1473]: time="2025-01-29T15:56:26.130889762Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=22527109" Jan 29 15:56:26.131786 containerd[1473]: time="2025-01-29T15:56:26.131759296Z" level=info msg="ImageCreate event name:\"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:26.134826 containerd[1473]: time="2025-01-29T15:56:26.134760584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:26.135815 containerd[1473]: time="2025-01-29T15:56:26.135777860Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"23968433\" in 1.148726534s" Jan 29 15:56:26.135815 containerd[1473]: time="2025-01-29T15:56:26.135812686Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:2933761aa7adae93679cdde1c0bf457bd4dc4b53f95fc066a4c50aa9c375ea13\"" Jan 29 15:56:26.136355 
containerd[1473]: time="2025-01-29T15:56:26.136197453Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 15:56:27.277569 containerd[1473]: time="2025-01-29T15:56:27.277514730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:27.278477 containerd[1473]: time="2025-01-29T15:56:27.278202705Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=17481115" Jan 29 15:56:27.279606 containerd[1473]: time="2025-01-29T15:56:27.279211477Z" level=info msg="ImageCreate event name:\"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:27.282206 containerd[1473]: time="2025-01-29T15:56:27.282147027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:27.283382 containerd[1473]: time="2025-01-29T15:56:27.283354722Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"18922457\" in 1.147128241s" Jan 29 15:56:27.283437 containerd[1473]: time="2025-01-29T15:56:27.283388709Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:ddb38cac617cb18802e09e448db4b3aa70e9e469b02defa76e6de7192847a71c\"" Jan 29 15:56:27.283820 containerd[1473]: time="2025-01-29T15:56:27.283801191Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 15:56:27.324798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 15:56:27.333759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:27.426196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:27.429245 (kubelet)[1946]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 15:56:27.463373 kubelet[1946]: E0129 15:56:27.463316 1946 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 15:56:27.466125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 15:56:27.466275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 15:56:27.466775 systemd[1]: kubelet.service: Consumed 129ms CPU time, 102.9M memory peak. Jan 29 15:56:28.510762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815631848.mount: Deactivated successfully. 
Jan 29 15:56:28.870335 containerd[1473]: time="2025-01-29T15:56:28.870197370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:28.870926 containerd[1473]: time="2025-01-29T15:56:28.870885834Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 29 15:56:28.871759 containerd[1473]: time="2025-01-29T15:56:28.871734517Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:28.873670 containerd[1473]: time="2025-01-29T15:56:28.873640927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:28.874389 containerd[1473]: time="2025-01-29T15:56:28.874350062Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.590520922s" Jan 29 15:56:28.874389 containerd[1473]: time="2025-01-29T15:56:28.874383210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 29 15:56:28.874895 containerd[1473]: time="2025-01-29T15:56:28.874859672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 15:56:29.825692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975258318.mount: Deactivated successfully. 
Jan 29 15:56:30.578033 containerd[1473]: time="2025-01-29T15:56:30.577791603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.578832 containerd[1473]: time="2025-01-29T15:56:30.578797731Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jan 29 15:56:30.579505 containerd[1473]: time="2025-01-29T15:56:30.579473774Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.582947 containerd[1473]: time="2025-01-29T15:56:30.582909292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.584371 containerd[1473]: time="2025-01-29T15:56:30.584241706Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.709352045s" Jan 29 15:56:30.585458 containerd[1473]: time="2025-01-29T15:56:30.585341161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 29 15:56:30.585918 containerd[1473]: time="2025-01-29T15:56:30.585896647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 15:56:30.962379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386414462.mount: Deactivated successfully. 
Jan 29 15:56:30.966901 containerd[1473]: time="2025-01-29T15:56:30.966852831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.967340 containerd[1473]: time="2025-01-29T15:56:30.967283000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jan 29 15:56:30.968045 containerd[1473]: time="2025-01-29T15:56:30.968008227Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.970241 containerd[1473]: time="2025-01-29T15:56:30.970202779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:30.971144 containerd[1473]: time="2025-01-29T15:56:30.971106063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 385.175148ms" Jan 29 15:56:30.971144 containerd[1473]: time="2025-01-29T15:56:30.971140651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 29 15:56:30.971592 containerd[1473]: time="2025-01-29T15:56:30.971553106Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 15:56:31.595751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292708912.mount: Deactivated successfully. Jan 29 15:56:33.352343 containerd[1473]: time="2025-01-29T15:56:33.352284320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:33.356842 containerd[1473]: time="2025-01-29T15:56:33.356795845Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431" Jan 29 15:56:33.358053 containerd[1473]: time="2025-01-29T15:56:33.358023055Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:33.362969 containerd[1473]: time="2025-01-29T15:56:33.362923736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:33.364359 containerd[1473]: time="2025-01-29T15:56:33.364207527Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.392622232s" Jan 29 15:56:33.364359 containerd[1473]: time="2025-01-29T15:56:33.364247515Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 29 15:56:37.716668 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 29 15:56:37.724770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:37.812471 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:37.815753 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 15:56:37.849577 kubelet[2104]: E0129 15:56:37.849505 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 15:56:37.851992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 15:56:37.852146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 15:56:37.852413 systemd[1]: kubelet.service: Consumed 122ms CPU time, 101.1M memory peak. Jan 29 15:56:37.987996 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:37.988135 systemd[1]: kubelet.service: Consumed 122ms CPU time, 101.1M memory peak. Jan 29 15:56:38.002845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:38.027849 systemd[1]: Reload requested from client PID 2120 ('systemctl') (unit session-7.scope)... Jan 29 15:56:38.027870 systemd[1]: Reloading... Jan 29 15:56:38.102611 zram_generator::config[2164]: No configuration found. Jan 29 15:56:38.303011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 15:56:38.375213 systemd[1]: Reloading finished in 347 ms. Jan 29 15:56:38.414929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:38.417358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:38.418747 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 15:56:38.418968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:38.419011 systemd[1]: kubelet.service: Consumed 80ms CPU time, 90.2M memory peak. Jan 29 15:56:38.420573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:38.519942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:38.523351 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 15:56:38.556071 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:56:38.556071 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 15:56:38.556071 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 15:56:38.556554 kubelet[2211]: I0129 15:56:38.556426 2211 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 15:56:39.449618 kubelet[2211]: I0129 15:56:39.448259 2211 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 15:56:39.449618 kubelet[2211]: I0129 15:56:39.448290 2211 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 15:56:39.449618 kubelet[2211]: I0129 15:56:39.448538 2211 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 15:56:39.492381 kubelet[2211]: E0129 15:56:39.492331 2211 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:39.496190 kubelet[2211]: I0129 15:56:39.496149 2211 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 15:56:39.504129 kubelet[2211]: E0129 15:56:39.504085 2211 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 15:56:39.504129 kubelet[2211]: I0129 15:56:39.504124 2211 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 15:56:39.506787 kubelet[2211]: I0129 15:56:39.506771 2211 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 15:56:39.507610 kubelet[2211]: I0129 15:56:39.507555 2211 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 15:56:39.507785 kubelet[2211]: I0129 15:56:39.507608 2211 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 15:56:39.507868 kubelet[2211]: I0129 15:56:39.507843 2211 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 15:56:39.507868 kubelet[2211]: I0129 15:56:39.507852 2211 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 15:56:39.508065 kubelet[2211]: I0129 15:56:39.508042 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:56:39.512200 kubelet[2211]: I0129 15:56:39.512170 2211 kubelet.go:446] "Attempting to sync node with API server" Jan 29 15:56:39.512200 kubelet[2211]: I0129 15:56:39.512196 2211 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 15:56:39.513562 kubelet[2211]: I0129 15:56:39.512215 2211 kubelet.go:352] "Adding apiserver pod source" Jan 29 15:56:39.513562 kubelet[2211]: I0129 15:56:39.512224 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.515702 2211 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.516530 2211 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 15:56:39.519865 kubelet[2211]: W0129 15:56:39.516777 2211 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.518048 2211 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.518075 2211 server.go:1287] "Started kubelet" Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.519072 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 15:56:39.519865 kubelet[2211]: I0129 15:56:39.519340 2211 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 15:56:39.519865 kubelet[2211]: W0129 15:56:39.519723 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:39.519865 kubelet[2211]: W0129 15:56:39.519732 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:39.519865 kubelet[2211]: E0129 15:56:39.519766 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:39.520124 kubelet[2211]: E0129 15:56:39.519791 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:39.520124 kubelet[2211]: I0129 15:56:39.519800 2211 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 15:56:39.524230 kubelet[2211]: I0129 15:56:39.523894 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 15:56:39.524640 kubelet[2211]: I0129 15:56:39.524617 2211 server.go:490] "Adding debug handlers to kubelet server" Jan 29 15:56:39.524935 kubelet[2211]: I0129 15:56:39.524911 2211 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 15:56:39.525251 kubelet[2211]: I0129 15:56:39.525223 2211 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 15:56:39.526018 kubelet[2211]: E0129 15:56:39.525399 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:39.526018 kubelet[2211]: I0129 15:56:39.525717 2211 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 15:56:39.526018 kubelet[2211]: I0129 15:56:39.525786 2211 reconciler.go:26] "Reconciler: start to sync state" Jan 29 15:56:39.526124 kubelet[2211]: E0129 15:56:39.526059 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Jan 29 15:56:39.526955 kubelet[2211]: W0129 15:56:39.526903 2211 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:39.527030 kubelet[2211]: E0129 15:56:39.526969 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:39.527363 kubelet[2211]: I0129 15:56:39.527330 2211 factory.go:221] Registration of the systemd container factory successfully Jan 29 15:56:39.527458 kubelet[2211]: I0129 15:56:39.527434 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 15:56:39.528358 kubelet[2211]: E0129 15:56:39.528120 2211 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f34f302dd5701 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 15:56:39.518058241 +0000 UTC m=+0.991788825,LastTimestamp:2025-01-29 15:56:39.518058241 +0000 UTC m=+0.991788825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 15:56:39.528479 kubelet[2211]: I0129 15:56:39.528435 2211 factory.go:221] Registration of the containerd container factory successfully Jan 29 15:56:39.529721 kubelet[2211]: E0129 15:56:39.529378 2211 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 15:56:39.539994 kubelet[2211]: I0129 15:56:39.539977 2211 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 15:56:39.540285 kubelet[2211]: I0129 15:56:39.540076 2211 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 15:56:39.540285 kubelet[2211]: I0129 15:56:39.540095 2211 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:56:39.540921 kubelet[2211]: I0129 15:56:39.540891 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 15:56:39.543049 kubelet[2211]: I0129 15:56:39.541853 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 15:56:39.543049 kubelet[2211]: I0129 15:56:39.541877 2211 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 15:56:39.543049 kubelet[2211]: I0129 15:56:39.541893 2211 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 29 15:56:39.543049 kubelet[2211]: I0129 15:56:39.541899 2211 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 15:56:39.543049 kubelet[2211]: E0129 15:56:39.541933 2211 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 15:56:39.544285 kubelet[2211]: W0129 15:56:39.544236 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:39.544378 kubelet[2211]: E0129 15:56:39.544285 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:39.612838 kubelet[2211]: I0129 15:56:39.612793 2211 policy_none.go:49] "None policy: Start" Jan 29 15:56:39.612838 kubelet[2211]: I0129 15:56:39.612828 2211 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 15:56:39.612838 kubelet[2211]: I0129 15:56:39.612841 2211 state_mem.go:35] "Initializing new in-memory state store" Jan 29 15:56:39.619280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 15:56:39.625856 kubelet[2211]: E0129 15:56:39.625820 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:39.637237 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 15:56:39.640543 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 15:56:39.642167 kubelet[2211]: E0129 15:56:39.642130 2211 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 15:56:39.653221 kubelet[2211]: I0129 15:56:39.653201 2211 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 15:56:39.653677 kubelet[2211]: I0129 15:56:39.653469 2211 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 15:56:39.653677 kubelet[2211]: I0129 15:56:39.653489 2211 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 15:56:39.653837 kubelet[2211]: I0129 15:56:39.653816 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 15:56:39.654919 kubelet[2211]: E0129 15:56:39.654891 2211 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 15:56:39.654984 kubelet[2211]: E0129 15:56:39.654930 2211 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 15:56:39.727140 kubelet[2211]: E0129 15:56:39.727002 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Jan 29 15:56:39.757235 kubelet[2211]: I0129 15:56:39.757206 2211 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:39.757682 kubelet[2211]: E0129 15:56:39.757644 2211 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Jan 29 15:56:39.850302 systemd[1]: Created slice kubepods-burstable-pod32d599829b4843a6ada1c14d516caf69.slice - libcontainer container kubepods-burstable-pod32d599829b4843a6ada1c14d516caf69.slice. Jan 29 15:56:39.874875 kubelet[2211]: E0129 15:56:39.874841 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:39.877723 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 29 15:56:39.892629 kubelet[2211]: E0129 15:56:39.892599 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:39.893986 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
Jan 29 15:56:39.895378 kubelet[2211]: E0129 15:56:39.895339 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:39.928742 kubelet[2211]: I0129 15:56:39.928712 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:39.928742 kubelet[2211]: I0129 15:56:39.928748 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:39.928850 kubelet[2211]: I0129 15:56:39.928772 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:39.928850 kubelet[2211]: I0129 15:56:39.928790 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:39.928850 kubelet[2211]: I0129 15:56:39.928806 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 15:56:39.928850 kubelet[2211]: I0129 15:56:39.928825 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:39.928850 kubelet[2211]: I0129 15:56:39.928840 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:39.928950 kubelet[2211]: I0129 15:56:39.928854 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:39.928950 kubelet[2211]: I0129 15:56:39.928869 2211 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:39.959674 kubelet[2211]: I0129 15:56:39.959652 2211 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:39.959966 kubelet[2211]: E0129 15:56:39.959931 2211 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Jan 29 15:56:40.127670 kubelet[2211]: E0129 15:56:40.127557 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Jan 29 15:56:40.175941 kubelet[2211]: E0129 15:56:40.175910 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.176760 containerd[1473]: time="2025-01-29T15:56:40.176664946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32d599829b4843a6ada1c14d516caf69,Namespace:kube-system,Attempt:0,}" Jan 29 15:56:40.193551 kubelet[2211]: E0129 15:56:40.193485 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.194052 containerd[1473]: time="2025-01-29T15:56:40.194007689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 15:56:40.196308 kubelet[2211]: E0129 15:56:40.196280 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.196708 containerd[1473]: time="2025-01-29T15:56:40.196677369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 15:56:40.361743 kubelet[2211]: I0129 15:56:40.361707 2211 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:40.362059 kubelet[2211]: E0129 15:56:40.362027 2211 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Jan 29 15:56:40.387679 kubelet[2211]: W0129 15:56:40.387577 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:40.387679 kubelet[2211]: E0129 15:56:40.387631 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:40.581203 kubelet[2211]: W0129 15:56:40.581139 2211 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:40.581203 kubelet[2211]: E0129 15:56:40.581201 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:40.589844 kubelet[2211]: W0129 15:56:40.589798 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:40.589844 kubelet[2211]: E0129 15:56:40.589838 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:40.691232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount351508750.mount: Deactivated successfully. Jan 29 15:56:40.696763 containerd[1473]: time="2025-01-29T15:56:40.696722251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 15:56:40.698120 containerd[1473]: time="2025-01-29T15:56:40.698057551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 15:56:40.699204 containerd[1473]: time="2025-01-29T15:56:40.699160390Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 15:56:40.699827 containerd[1473]: time="2025-01-29T15:56:40.699801866Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 15:56:40.700476 containerd[1473]: time="2025-01-29T15:56:40.700431786Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 15:56:40.702292 containerd[1473]: time="2025-01-29T15:56:40.702226609Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 15:56:40.703089 containerd[1473]: time="2025-01-29T15:56:40.702959462Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 15:56:40.703708 containerd[1473]: time="2025-01-29T15:56:40.703675680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 15:56:40.706895 containerd[1473]: time="2025-01-29T15:56:40.706676275Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.588008ms" Jan 29 15:56:40.708030 containerd[1473]: time="2025-01-29T15:56:40.707990581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.243109ms" Jan 29 15:56:40.708552 containerd[1473]: time="2025-01-29T15:56:40.708522645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.773961ms" Jan 29 15:56:40.847392 containerd[1473]: time="2025-01-29T15:56:40.847288502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:56:40.847392 containerd[1473]: time="2025-01-29T15:56:40.847368242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:56:40.847657 containerd[1473]: time="2025-01-29T15:56:40.847384598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.848246 containerd[1473]: time="2025-01-29T15:56:40.848163000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:56:40.848246 containerd[1473]: time="2025-01-29T15:56:40.848004800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:56:40.848246 containerd[1473]: time="2025-01-29T15:56:40.848058706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:56:40.848246 containerd[1473]: time="2025-01-29T15:56:40.848089218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.848246 containerd[1473]: time="2025-01-29T15:56:40.848168558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.848379 containerd[1473]: time="2025-01-29T15:56:40.848274971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.848609 containerd[1473]: time="2025-01-29T15:56:40.848216826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:56:40.848829 containerd[1473]: time="2025-01-29T15:56:40.848785121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.849522 containerd[1473]: time="2025-01-29T15:56:40.848943881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:40.869738 systemd[1]: Started cri-containerd-4545361f9a3e69c292b87d0d7aab8b92e6bb13c5f6a0a0e632ada63e17f73ad8.scope - libcontainer container 4545361f9a3e69c292b87d0d7aab8b92e6bb13c5f6a0a0e632ada63e17f73ad8. Jan 29 15:56:40.874235 systemd[1]: Started cri-containerd-c796175b3064533d7da3c6ae1c6c86e2ddb0644a1146b1eb6da5bc91efc993b5.scope - libcontainer container c796175b3064533d7da3c6ae1c6c86e2ddb0644a1146b1eb6da5bc91efc993b5. Jan 29 15:56:40.875193 systemd[1]: Started cri-containerd-eef356a3903134b98c7b9504de0447da8ce3394095e1dcd30aa93e3be3deb1e5.scope - libcontainer container eef356a3903134b98c7b9504de0447da8ce3394095e1dcd30aa93e3be3deb1e5. Jan 29 15:56:40.902728 containerd[1473]: time="2025-01-29T15:56:40.902579660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32d599829b4843a6ada1c14d516caf69,Namespace:kube-system,Attempt:0,} returns sandbox id \"4545361f9a3e69c292b87d0d7aab8b92e6bb13c5f6a0a0e632ada63e17f73ad8\"" Jan 29 15:56:40.903966 kubelet[2211]: E0129 15:56:40.903909 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.906785 containerd[1473]: time="2025-01-29T15:56:40.906713167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"eef356a3903134b98c7b9504de0447da8ce3394095e1dcd30aa93e3be3deb1e5\"" Jan 29 15:56:40.907820 kubelet[2211]: E0129 15:56:40.907675 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.909032 containerd[1473]: time="2025-01-29T15:56:40.908999625Z" level=info msg="CreateContainer within sandbox \"4545361f9a3e69c292b87d0d7aab8b92e6bb13c5f6a0a0e632ada63e17f73ad8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 15:56:40.910049 containerd[1473]: time="2025-01-29T15:56:40.910025004Z" level=info msg="CreateContainer within sandbox \"eef356a3903134b98c7b9504de0447da8ce3394095e1dcd30aa93e3be3deb1e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 15:56:40.910285 containerd[1473]: time="2025-01-29T15:56:40.910247147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"c796175b3064533d7da3c6ae1c6c86e2ddb0644a1146b1eb6da5bc91efc993b5\"" Jan 29 15:56:40.910902 kubelet[2211]: E0129 15:56:40.910884 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:40.912708 containerd[1473]: time="2025-01-29T15:56:40.912681407Z" level=info msg="CreateContainer within sandbox \"c796175b3064533d7da3c6ae1c6c86e2ddb0644a1146b1eb6da5bc91efc993b5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 15:56:40.928410 kubelet[2211]: E0129 15:56:40.928348 2211 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Jan 29 15:56:40.931350 containerd[1473]: time="2025-01-29T15:56:40.931278791Z" level=info msg="CreateContainer within sandbox \"eef356a3903134b98c7b9504de0447da8ce3394095e1dcd30aa93e3be3deb1e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0881cd09412bae79c07057eb47769531439be6e48cdbbd9abe7e840c0c71aa45\"" Jan 29 15:56:40.931934 containerd[1473]: time="2025-01-29T15:56:40.931912109Z" level=info msg="StartContainer for \"0881cd09412bae79c07057eb47769531439be6e48cdbbd9abe7e840c0c71aa45\"" Jan 29 15:56:40.932401 containerd[1473]: time="2025-01-29T15:56:40.932321565Z" level=info msg="CreateContainer within sandbox \"4545361f9a3e69c292b87d0d7aab8b92e6bb13c5f6a0a0e632ada63e17f73ad8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3fdfad7b2bcf487700d0839dc3e751bdb8287dac8587c6375732525001940832\"" Jan 29 15:56:40.932663 containerd[1473]: time="2025-01-29T15:56:40.932641643Z" level=info msg="StartContainer for \"3fdfad7b2bcf487700d0839dc3e751bdb8287dac8587c6375732525001940832\"" Jan 29 15:56:40.934100 containerd[1473]: time="2025-01-29T15:56:40.934068120Z" level=info msg="CreateContainer within sandbox \"c796175b3064533d7da3c6ae1c6c86e2ddb0644a1146b1eb6da5bc91efc993b5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"73e09d6d440eba6ee8c94ca2e26f438a4cc8a073a6cf8bd28cb590ce39173fc4\"" Jan 29 15:56:40.934641 containerd[1473]: time="2025-01-29T15:56:40.934490533Z" level=info msg="StartContainer for \"73e09d6d440eba6ee8c94ca2e26f438a4cc8a073a6cf8bd28cb590ce39173fc4\"" Jan 29 15:56:40.970751 systemd[1]: Started cri-containerd-0881cd09412bae79c07057eb47769531439be6e48cdbbd9abe7e840c0c71aa45.scope - libcontainer container 0881cd09412bae79c07057eb47769531439be6e48cdbbd9abe7e840c0c71aa45. Jan 29 15:56:40.972014 systemd[1]: Started cri-containerd-3fdfad7b2bcf487700d0839dc3e751bdb8287dac8587c6375732525001940832.scope - libcontainer container 3fdfad7b2bcf487700d0839dc3e751bdb8287dac8587c6375732525001940832. Jan 29 15:56:40.972859 systemd[1]: Started cri-containerd-73e09d6d440eba6ee8c94ca2e26f438a4cc8a073a6cf8bd28cb590ce39173fc4.scope - libcontainer container 73e09d6d440eba6ee8c94ca2e26f438a4cc8a073a6cf8bd28cb590ce39173fc4. 
Jan 29 15:56:40.998284 kubelet[2211]: W0129 15:56:40.997836 2211 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Jan 29 15:56:40.998284 kubelet[2211]: E0129 15:56:40.997902 2211 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Jan 29 15:56:41.009255 containerd[1473]: time="2025-01-29T15:56:41.006642876Z" level=info msg="StartContainer for \"0881cd09412bae79c07057eb47769531439be6e48cdbbd9abe7e840c0c71aa45\" returns successfully" Jan 29 15:56:41.029710 containerd[1473]: time="2025-01-29T15:56:41.027714197Z" level=info msg="StartContainer for \"73e09d6d440eba6ee8c94ca2e26f438a4cc8a073a6cf8bd28cb590ce39173fc4\" returns successfully" Jan 29 15:56:41.029710 containerd[1473]: time="2025-01-29T15:56:41.027783820Z" level=info msg="StartContainer for \"3fdfad7b2bcf487700d0839dc3e751bdb8287dac8587c6375732525001940832\" returns successfully" Jan 29 15:56:41.165004 kubelet[2211]: I0129 15:56:41.164974 2211 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:41.165343 kubelet[2211]: E0129 15:56:41.165317 2211 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Jan 29 15:56:41.552738 kubelet[2211]: E0129 15:56:41.552709 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:41.553213 kubelet[2211]: E0129 15:56:41.552792 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:41.553213 kubelet[2211]: E0129 15:56:41.553081 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:41.553521 kubelet[2211]: E0129 15:56:41.553261 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:41.556900 kubelet[2211]: E0129 15:56:41.556881 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:41.557014 kubelet[2211]: E0129 15:56:41.556995 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:42.558145 kubelet[2211]: E0129 15:56:42.558108 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:42.558456 kubelet[2211]: E0129 15:56:42.558237 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:42.558484 kubelet[2211]: E0129 
15:56:42.558460 2211 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 15:56:42.561655 kubelet[2211]: E0129 15:56:42.558551 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:42.712506 kubelet[2211]: E0129 15:56:42.712472 2211 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 15:56:42.767045 kubelet[2211]: I0129 15:56:42.767009 2211 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:42.776846 kubelet[2211]: I0129 15:56:42.776804 2211 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 15:56:42.776846 kubelet[2211]: E0129 15:56:42.776844 2211 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 15:56:42.779967 kubelet[2211]: E0129 15:56:42.779932 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:42.880611 kubelet[2211]: E0129 15:56:42.880485 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:42.981096 kubelet[2211]: E0129 15:56:42.981062 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.082097 kubelet[2211]: E0129 15:56:43.082058 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.182647 kubelet[2211]: E0129 15:56:43.182522 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.283117 kubelet[2211]: E0129 15:56:43.283069 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.383640 kubelet[2211]: E0129 15:56:43.383602 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.484806 kubelet[2211]: E0129 15:56:43.484676 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.585775 kubelet[2211]: E0129 15:56:43.585732 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.686731 kubelet[2211]: E0129 15:56:43.686674 2211 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:43.825858 kubelet[2211]: I0129 15:56:43.825718 2211 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 15:56:43.836849 kubelet[2211]: I0129 15:56:43.836809 2211 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:43.841207 kubelet[2211]: I0129 15:56:43.841181 2211 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:44.517846 kubelet[2211]: I0129 15:56:44.517793 2211 apiserver.go:52] "Watching apiserver" Jan 29 15:56:44.520469 kubelet[2211]: E0129 15:56:44.520353 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:44.520469 kubelet[2211]: E0129 15:56:44.520412 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:44.520645 kubelet[2211]: E0129 15:56:44.520488 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:44.526032 kubelet[2211]: I0129 15:56:44.526015 2211 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 15:56:44.721318 kubelet[2211]: E0129 15:56:44.721287 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:44.983616 systemd[1]: Reload requested from client PID 2486 ('systemctl') (unit session-7.scope)... Jan 29 15:56:44.983655 systemd[1]: Reloading... Jan 29 15:56:45.053617 zram_generator::config[2536]: No configuration found. Jan 29 15:56:45.126286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 15:56:45.207927 systemd[1]: Reloading finished in 224 ms. Jan 29 15:56:45.227159 kubelet[2211]: I0129 15:56:45.227096 2211 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 15:56:45.227372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:45.238534 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 15:56:45.238830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:45.238882 systemd[1]: kubelet.service: Consumed 1.391s CPU time, 128.1M memory peak. Jan 29 15:56:45.248974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 15:56:45.346110 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 15:56:45.351097 (kubelet)[2572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 15:56:45.389450 kubelet[2572]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 15:56:45.389450 kubelet[2572]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 15:56:45.389450 kubelet[2572]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
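The recurring "Nameserver limits exceeded" warnings record that the resolv.conf the kubelet hands to pods listed more nameservers than it will propagate, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 were kept. A minimal sketch of that truncation, assuming the usual limit of three nameservers (the limit itself is not printed in this log, only that "some nameservers have been omitted"):

```python
MAX_NAMESERVERS = 3  # assumed limit; the log only reports that entries were omitted

def applied_nameservers(resolv_conf: str) -> list[str]:
    """Return the nameserver entries that survive truncation to MAX_NAMESERVERS."""
    servers = [line.split()[1]
               for line in resolv_conf.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]

example = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""  # the fourth entry is hypothetical, added only to trigger the truncation
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```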
Jan 29 15:56:45.389776 kubelet[2572]: I0129 15:56:45.389503 2572 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 15:56:45.396615 kubelet[2572]: I0129 15:56:45.395803 2572 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 15:56:45.396615 kubelet[2572]: I0129 15:56:45.395834 2572 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 15:56:45.396615 kubelet[2572]: I0129 15:56:45.396070 2572 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 15:56:45.397303 kubelet[2572]: I0129 15:56:45.397272 2572 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 15:56:45.399572 kubelet[2572]: I0129 15:56:45.399546 2572 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 15:56:45.402210 kubelet[2572]: E0129 15:56:45.402180 2572 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 15:56:45.402210 kubelet[2572]: I0129 15:56:45.402209 2572 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 15:56:45.405686 kubelet[2572]: I0129 15:56:45.405200 2572 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 15:56:45.405686 kubelet[2572]: I0129 15:56:45.405400 2572 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 15:56:45.405797 kubelet[2572]: I0129 15:56:45.405424 2572 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 15:56:45.405862 kubelet[2572]: I0129 15:56:45.405805 2572 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 29 15:56:45.405862 kubelet[2572]: I0129 15:56:45.405816 2572 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 15:56:45.405901 kubelet[2572]: I0129 15:56:45.405868 2572 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:56:45.406172 kubelet[2572]: I0129 15:56:45.406160 2572 kubelet.go:446] "Attempting to sync node with API server" Jan 29 15:56:45.406201 kubelet[2572]: I0129 15:56:45.406184 2572 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 15:56:45.406230 kubelet[2572]: I0129 15:56:45.406202 2572 kubelet.go:352] "Adding apiserver pod source" Jan 29 15:56:45.406230 kubelet[2572]: I0129 15:56:45.406211 2572 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 15:56:45.408607 kubelet[2572]: I0129 15:56:45.406812 2572 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 15:56:45.408607 kubelet[2572]: I0129 15:56:45.407819 2572 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 15:56:45.408607 kubelet[2572]: I0129 15:56:45.408216 2572 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 15:56:45.408607 kubelet[2572]: I0129 15:56:45.408238 2572 server.go:1287] "Started kubelet" Jan 29 15:56:45.408754 kubelet[2572]: I0129 15:56:45.408713 2572 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 15:56:45.408995 kubelet[2572]: I0129 15:56:45.408969 2572 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 15:56:45.409048 kubelet[2572]: I0129 15:56:45.409027 2572 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 15:56:45.410163 kubelet[2572]: I0129 15:56:45.410135 2572 server.go:490] "Adding debug handlers to kubelet server" Jan 29 15:56:45.411463 kubelet[2572]: E0129 15:56:45.411363 2572 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 15:56:45.411526 kubelet[2572]: I0129 15:56:45.411487 2572 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 15:56:45.411611 kubelet[2572]: I0129 15:56:45.411582 2572 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 15:56:45.412373 kubelet[2572]: I0129 15:56:45.412346 2572 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 15:56:45.412448 kubelet[2572]: I0129 15:56:45.412432 2572 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 15:56:45.413529 kubelet[2572]: I0129 15:56:45.413506 2572 factory.go:221] Registration of the containerd container factory successfully Jan 29 15:56:45.413529 kubelet[2572]: I0129 15:56:45.413529 2572 factory.go:221] Registration of the systemd container factory successfully Jan 29 15:56:45.413643 kubelet[2572]: I0129 15:56:45.413621 2572 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 15:56:45.415815 kubelet[2572]: I0129 15:56:45.415694 2572 reconciler.go:26] "Reconciler: start to sync state" Jan 29 15:56:45.426414 kubelet[2572]: I0129 15:56:45.426364 2572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 15:56:45.427722 kubelet[2572]: E0129 15:56:45.427696 2572 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 15:56:45.432577 kubelet[2572]: I0129 15:56:45.432552 2572 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 15:56:45.433633 kubelet[2572]: I0129 15:56:45.432696 2572 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 15:56:45.433633 kubelet[2572]: I0129 15:56:45.432720 2572 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
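The NodeConfig dump a few lines up lists the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and nodefs/imagefs inodesFree < 5%. A rough sketch of evaluating such quantity/percentage thresholds against observed stats; the signal names and values are copied from the log, while the evaluation code and the sample observations are only illustrative:

```python
# Thresholds copied from the HardEvictionThresholds block in the log above.
HARD_EVICTION = {
    "memory.available":   {"quantity": 100 * 1024 * 1024},  # 100Mi
    "nodefs.available":   {"percentage": 0.10},
    "imagefs.available":  {"percentage": 0.15},
    "nodefs.inodesFree":  {"percentage": 0.05},
    "imagefs.inodesFree": {"percentage": 0.05},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """True if 'available' for the signal has fallen below its hard eviction threshold."""
    t = HARD_EVICTION[signal]
    limit = t["quantity"] if "quantity" in t else t["percentage"] * capacity
    return available < limit

# Hypothetical observations, just to exercise the check.
print(breached("memory.available", available=80 * 1024 * 1024, capacity=2 * 1024**3))  # True
print(breached("nodefs.available", available=30 * 1024**3, capacity=100 * 1024**3))    # False
```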
Jan 29 15:56:45.433633 kubelet[2572]: I0129 15:56:45.432728 2572 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 15:56:45.433633 kubelet[2572]: E0129 15:56:45.432766 2572 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 15:56:45.460576 kubelet[2572]: I0129 15:56:45.460547 2572 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 15:56:45.460576 kubelet[2572]: I0129 15:56:45.460570 2572 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 15:56:45.460711 kubelet[2572]: I0129 15:56:45.460614 2572 state_mem.go:36] "Initialized new in-memory state store" Jan 29 15:56:45.460785 kubelet[2572]: I0129 15:56:45.460766 2572 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 15:56:45.460810 kubelet[2572]: I0129 15:56:45.460785 2572 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 15:56:45.460810 kubelet[2572]: I0129 15:56:45.460803 2572 policy_none.go:49] "None policy: Start" Jan 29 15:56:45.460847 kubelet[2572]: I0129 15:56:45.460811 2572 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 15:56:45.460847 kubelet[2572]: I0129 15:56:45.460820 2572 state_mem.go:35] "Initializing new in-memory state store" Jan 29 15:56:45.460918 kubelet[2572]: I0129 15:56:45.460908 2572 state_mem.go:75] "Updated machine memory state" Jan 29 15:56:45.464540 kubelet[2572]: I0129 15:56:45.464520 2572 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 15:56:45.464877 kubelet[2572]: I0129 15:56:45.464689 2572 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 15:56:45.464877 kubelet[2572]: I0129 15:56:45.464709 2572 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 15:56:45.464969 kubelet[2572]: I0129 15:56:45.464880 2572 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 15:56:45.465773 kubelet[2572]: E0129 15:56:45.465744 2572 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 15:56:45.533560 kubelet[2572]: I0129 15:56:45.533446 2572 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.533560 kubelet[2572]: I0129 15:56:45.533511 2572 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 15:56:45.533831 kubelet[2572]: I0129 15:56:45.533790 2572 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:45.539867 kubelet[2572]: E0129 15:56:45.539834 2572 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.539867 kubelet[2572]: E0129 15:56:45.539854 2572 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 15:56:45.539955 kubelet[2572]: E0129 15:56:45.539920 2572 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:45.568283 kubelet[2572]: I0129 15:56:45.568251 2572 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 15:56:45.574005 kubelet[2572]: I0129 15:56:45.573977 2572 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 15:56:45.574087 kubelet[2572]: I0129 15:56:45.574072 2572 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 15:56:45.617569 kubelet[2572]: I0129 15:56:45.617529 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.617569 kubelet[2572]: I0129 15:56:45.617568 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.617703 kubelet[2572]: I0129 15:56:45.617604 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.617703 kubelet[2572]: I0129 15:56:45.617624 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:45.617703 kubelet[2572]: I0129 15:56:45.617641 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:45.617703 kubelet[2572]: I0129 15:56:45.617656 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32d599829b4843a6ada1c14d516caf69-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32d599829b4843a6ada1c14d516caf69\") " pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:45.617703 kubelet[2572]: I0129 15:56:45.617678 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.617808 kubelet[2572]: I0129 15:56:45.617693 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 15:56:45.617808 kubelet[2572]: I0129 15:56:45.617710 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 15:56:45.840870 kubelet[2572]: E0129 15:56:45.840686 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:45.840870 kubelet[2572]: E0129 15:56:45.840752 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:45.840870 kubelet[2572]: E0129 15:56:45.840690 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:46.407158 kubelet[2572]: I0129 15:56:46.407118 2572 apiserver.go:52] "Watching apiserver" Jan 29 15:56:46.412758 kubelet[2572]: I0129 15:56:46.412713 2572 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 15:56:46.447308 kubelet[2572]: I0129 15:56:46.447229 2572 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:46.447694 kubelet[2572]: E0129 15:56:46.447432 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:46.447694 kubelet[2572]: E0129 15:56:46.447546 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:46.452380 kubelet[2572]: E0129 15:56:46.452346 2572 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 15:56:46.452711 kubelet[2572]: E0129 
15:56:46.452500 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:46.472120 kubelet[2572]: I0129 15:56:46.472062 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.47204587 podStartE2EDuration="3.47204587s" podCreationTimestamp="2025-01-29 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:56:46.465786868 +0000 UTC m=+1.110291128" watchObservedRunningTime="2025-01-29 15:56:46.47204587 +0000 UTC m=+1.116550130" Jan 29 15:56:46.478742 kubelet[2572]: I0129 15:56:46.478688 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.478675154 podStartE2EDuration="3.478675154s" podCreationTimestamp="2025-01-29 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:56:46.472263744 +0000 UTC m=+1.116768004" watchObservedRunningTime="2025-01-29 15:56:46.478675154 +0000 UTC m=+1.123179414" Jan 29 15:56:46.487033 kubelet[2572]: I0129 15:56:46.486988 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.486975087 podStartE2EDuration="3.486975087s" podCreationTimestamp="2025-01-29 15:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:56:46.479239156 +0000 UTC m=+1.123743416" watchObservedRunningTime="2025-01-29 15:56:46.486975087 +0000 UTC m=+1.131479347" Jan 29 15:56:47.450285 kubelet[2572]: E0129 15:56:47.450214 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:47.455549 kubelet[2572]: E0129 15:56:47.451561 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:50.325631 sudo[1656]: pam_unix(sudo:session): session closed for user root Jan 29 15:56:50.326980 sshd[1655]: Connection closed by 10.0.0.1 port 50386 Jan 29 15:56:50.327319 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Jan 29 15:56:50.330346 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Jan 29 15:56:50.330644 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:50386.service: Deactivated successfully. Jan 29 15:56:50.332326 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 15:56:50.332518 systemd[1]: session-7.scope: Consumed 6.576s CPU time, 222M memory peak. Jan 29 15:56:50.333649 systemd-logind[1458]: Removed session 7. Jan 29 15:56:51.740067 kubelet[2572]: I0129 15:56:51.739931 2572 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 15:56:51.746335 containerd[1473]: time="2025-01-29T15:56:51.746254328Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 15:56:51.746869 kubelet[2572]: I0129 15:56:51.746453 2572 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 15:56:51.907740 kubelet[2572]: E0129 15:56:51.907684 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:52.030008 kubelet[2572]: E0129 15:56:52.029900 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:52.423890 systemd[1]: Created slice kubepods-besteffort-poda455b6f6_37eb_4c42_908a_56fa6b576271.slice - libcontainer container kubepods-besteffort-poda455b6f6_37eb_4c42_908a_56fa6b576271.slice. Jan 29 15:56:52.458231 kubelet[2572]: E0129 15:56:52.458176 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:52.458933 kubelet[2572]: E0129 15:56:52.458370 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:52.464753 kubelet[2572]: I0129 15:56:52.464709 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a455b6f6-37eb-4c42-908a-56fa6b576271-lib-modules\") pod \"kube-proxy-2c284\" (UID: \"a455b6f6-37eb-4c42-908a-56fa6b576271\") " pod="kube-system/kube-proxy-2c284" Jan 29 15:56:52.464887 kubelet[2572]: I0129 15:56:52.464764 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c826p\" (UniqueName: \"kubernetes.io/projected/a455b6f6-37eb-4c42-908a-56fa6b576271-kube-api-access-c826p\") pod \"kube-proxy-2c284\" (UID: \"a455b6f6-37eb-4c42-908a-56fa6b576271\") " pod="kube-system/kube-proxy-2c284" Jan 29 15:56:52.464887 kubelet[2572]: I0129 15:56:52.464790 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a455b6f6-37eb-4c42-908a-56fa6b576271-kube-proxy\") pod \"kube-proxy-2c284\" (UID: \"a455b6f6-37eb-4c42-908a-56fa6b576271\") " pod="kube-system/kube-proxy-2c284" Jan 29 15:56:52.464887 kubelet[2572]: I0129 15:56:52.464804 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a455b6f6-37eb-4c42-908a-56fa6b576271-xtables-lock\") pod \"kube-proxy-2c284\" (UID: \"a455b6f6-37eb-4c42-908a-56fa6b576271\") " pod="kube-system/kube-proxy-2c284" Jan 29 15:56:52.579340 kubelet[2572]: E0129 15:56:52.579042 2572 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 15:56:52.579340 kubelet[2572]: E0129 15:56:52.579073 2572 projected.go:194] Error preparing data for projected volume kube-api-access-c826p for pod kube-system/kube-proxy-2c284: configmap "kube-root-ca.crt" not found Jan 29 15:56:52.579340 kubelet[2572]: E0129 15:56:52.579129 2572 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a455b6f6-37eb-4c42-908a-56fa6b576271-kube-api-access-c826p podName:a455b6f6-37eb-4c42-908a-56fa6b576271 nodeName:}" failed. 
No retries permitted until 2025-01-29 15:56:53.079108032 +0000 UTC m=+7.723612292 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c826p" (UniqueName: "kubernetes.io/projected/a455b6f6-37eb-4c42-908a-56fa6b576271-kube-api-access-c826p") pod "kube-proxy-2c284" (UID: "a455b6f6-37eb-4c42-908a-56fa6b576271") : configmap "kube-root-ca.crt" not found Jan 29 15:56:52.839570 systemd[1]: Created slice kubepods-besteffort-pod177583a0_d61e_49c8_8271_a9c78fb8f444.slice - libcontainer container kubepods-besteffort-pod177583a0_d61e_49c8_8271_a9c78fb8f444.slice. Jan 29 15:56:52.866833 kubelet[2572]: I0129 15:56:52.866746 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/177583a0-d61e-49c8-8271-a9c78fb8f444-var-lib-calico\") pod \"tigera-operator-7d68577dc5-9mqn6\" (UID: \"177583a0-d61e-49c8-8271-a9c78fb8f444\") " pod="tigera-operator/tigera-operator-7d68577dc5-9mqn6" Jan 29 15:56:52.867115 kubelet[2572]: I0129 15:56:52.866839 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r69sq\" (UniqueName: \"kubernetes.io/projected/177583a0-d61e-49c8-8271-a9c78fb8f444-kube-api-access-r69sq\") pod \"tigera-operator-7d68577dc5-9mqn6\" (UID: \"177583a0-d61e-49c8-8271-a9c78fb8f444\") " pod="tigera-operator/tigera-operator-7d68577dc5-9mqn6" Jan 29 15:56:53.143293 containerd[1473]: time="2025-01-29T15:56:53.143180185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-9mqn6,Uid:177583a0-d61e-49c8-8271-a9c78fb8f444,Namespace:tigera-operator,Attempt:0,}" Jan 29 15:56:53.176324 containerd[1473]: time="2025-01-29T15:56:53.176219376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:56:53.176324 containerd[1473]: time="2025-01-29T15:56:53.176271607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:56:53.176324 containerd[1473]: time="2025-01-29T15:56:53.176281805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:53.176656 containerd[1473]: time="2025-01-29T15:56:53.176350794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:53.193773 systemd[1]: Started cri-containerd-4329447b711cb8119a0c5d73965e0a90b7a5bf2ad6409e969f00c90ab8fffbed.scope - libcontainer container 4329447b711cb8119a0c5d73965e0a90b7a5bf2ad6409e969f00c90ab8fffbed. 
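systemd creates the slice kubepods-besteffort-pod177583a0_d61e_49c8_8271_a9c78fb8f444.slice for the tigera-operator pod whose UID appears in the volume entries as 177583a0-d61e-49c8-8271-a9c78fb8f444: with the systemd cgroup driver, the QoS class and pod UID are joined and the dashes in the UID become underscores. A small sketch reproducing that naming, assuming the dash-to-underscore rule is all that is involved for the besteffort and burstable pods seen in this log:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build the systemd slice name used for a pod cgroup, as seen in the journal above."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# UIDs taken from the volume-mount entries in the log above.
print(pod_slice_name("besteffort", "177583a0-d61e-49c8-8271-a9c78fb8f444"))
# kubepods-besteffort-pod177583a0_d61e_49c8_8271_a9c78fb8f444.slice
print(pod_slice_name("burstable", "32d599829b4843a6ada1c14d516caf69"))
# kubepods-burstable-pod32d599829b4843a6ada1c14d516caf69.slice
```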
Jan 29 15:56:53.219699 containerd[1473]: time="2025-01-29T15:56:53.219632658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-9mqn6,Uid:177583a0-d61e-49c8-8271-a9c78fb8f444,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4329447b711cb8119a0c5d73965e0a90b7a5bf2ad6409e969f00c90ab8fffbed\"" Jan 29 15:56:53.221449 containerd[1473]: time="2025-01-29T15:56:53.221403480Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 15:56:53.334570 kubelet[2572]: E0129 15:56:53.334527 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:53.335045 containerd[1473]: time="2025-01-29T15:56:53.335011531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c284,Uid:a455b6f6-37eb-4c42-908a-56fa6b576271,Namespace:kube-system,Attempt:0,}" Jan 29 15:56:53.373015 containerd[1473]: time="2025-01-29T15:56:53.372920062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:56:53.373015 containerd[1473]: time="2025-01-29T15:56:53.372970493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:56:53.373015 containerd[1473]: time="2025-01-29T15:56:53.372981291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:53.373248 containerd[1473]: time="2025-01-29T15:56:53.373109150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:56:53.396736 systemd[1]: Started cri-containerd-95671372bd39af45a608d77399033a00f2c8b0d01de2e2dd9d79f31794822b9e.scope - libcontainer container 95671372bd39af45a608d77399033a00f2c8b0d01de2e2dd9d79f31794822b9e. Jan 29 15:56:53.415658 containerd[1473]: time="2025-01-29T15:56:53.415460211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2c284,Uid:a455b6f6-37eb-4c42-908a-56fa6b576271,Namespace:kube-system,Attempt:0,} returns sandbox id \"95671372bd39af45a608d77399033a00f2c8b0d01de2e2dd9d79f31794822b9e\"" Jan 29 15:56:53.416136 kubelet[2572]: E0129 15:56:53.416112 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:53.418738 containerd[1473]: time="2025-01-29T15:56:53.418635996Z" level=info msg="CreateContainer within sandbox \"95671372bd39af45a608d77399033a00f2c8b0d01de2e2dd9d79f31794822b9e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 15:56:53.431845 containerd[1473]: time="2025-01-29T15:56:53.431788819Z" level=info msg="CreateContainer within sandbox \"95671372bd39af45a608d77399033a00f2c8b0d01de2e2dd9d79f31794822b9e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8bb90e864b25569acc38cbb556e99253ec00cdc58c3b1bfbf5d2fcca1b3ff8f\"" Jan 29 15:56:53.438936 containerd[1473]: time="2025-01-29T15:56:53.438908019Z" level=info msg="StartContainer for \"e8bb90e864b25569acc38cbb556e99253ec00cdc58c3b1bfbf5d2fcca1b3ff8f\"" Jan 29 15:56:53.465804 systemd[1]: Started cri-containerd-e8bb90e864b25569acc38cbb556e99253ec00cdc58c3b1bfbf5d2fcca1b3ff8f.scope - libcontainer container e8bb90e864b25569acc38cbb556e99253ec00cdc58c3b1bfbf5d2fcca1b3ff8f. 
Jan 29 15:56:53.493991 containerd[1473]: time="2025-01-29T15:56:53.492271585Z" level=info msg="StartContainer for \"e8bb90e864b25569acc38cbb556e99253ec00cdc58c3b1bfbf5d2fcca1b3ff8f\" returns successfully" Jan 29 15:56:54.305856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672104277.mount: Deactivated successfully. Jan 29 15:56:54.464655 kubelet[2572]: E0129 15:56:54.464511 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:54.472724 kubelet[2572]: I0129 15:56:54.472675 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2c284" podStartSLOduration=2.472657781 podStartE2EDuration="2.472657781s" podCreationTimestamp="2025-01-29 15:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:56:54.472646543 +0000 UTC m=+9.117150803" watchObservedRunningTime="2025-01-29 15:56:54.472657781 +0000 UTC m=+9.117162041" Jan 29 15:56:55.465544 kubelet[2572]: E0129 15:56:55.465376 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:55.481904 kubelet[2572]: E0129 15:56:55.481607 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:55.902032 containerd[1473]: time="2025-01-29T15:56:55.901979153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:55.902677 containerd[1473]: time="2025-01-29T15:56:55.902626411Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 15:56:55.903471 containerd[1473]: time="2025-01-29T15:56:55.903426204Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:55.905495 containerd[1473]: time="2025-01-29T15:56:55.905464602Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:56:55.906764 containerd[1473]: time="2025-01-29T15:56:55.906735521Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.685293807s" Jan 29 15:56:55.906822 containerd[1473]: time="2025-01-29T15:56:55.906768555Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 15:56:55.911641 containerd[1473]: time="2025-01-29T15:56:55.911612309Z" level=info msg="CreateContainer within sandbox \"4329447b711cb8119a0c5d73965e0a90b7a5bf2ad6409e969f00c90ab8fffbed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 15:56:55.920807 containerd[1473]: time="2025-01-29T15:56:55.920755743Z" level=info 
msg="CreateContainer within sandbox \"4329447b711cb8119a0c5d73965e0a90b7a5bf2ad6409e969f00c90ab8fffbed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"88079acfa04fcccc5ab05c87d7e1650544b3a34dfcb3240edccdcbe568abbac1\"" Jan 29 15:56:55.921223 containerd[1473]: time="2025-01-29T15:56:55.921198673Z" level=info msg="StartContainer for \"88079acfa04fcccc5ab05c87d7e1650544b3a34dfcb3240edccdcbe568abbac1\"" Jan 29 15:56:55.948740 systemd[1]: Started cri-containerd-88079acfa04fcccc5ab05c87d7e1650544b3a34dfcb3240edccdcbe568abbac1.scope - libcontainer container 88079acfa04fcccc5ab05c87d7e1650544b3a34dfcb3240edccdcbe568abbac1. Jan 29 15:56:55.974732 containerd[1473]: time="2025-01-29T15:56:55.974684373Z" level=info msg="StartContainer for \"88079acfa04fcccc5ab05c87d7e1650544b3a34dfcb3240edccdcbe568abbac1\" returns successfully" Jan 29 15:56:56.471394 kubelet[2572]: E0129 15:56:56.471349 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:56.488757 kubelet[2572]: I0129 15:56:56.488646 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-9mqn6" podStartSLOduration=1.799382067 podStartE2EDuration="4.488628484s" podCreationTimestamp="2025-01-29 15:56:52 +0000 UTC" firstStartedPulling="2025-01-29 15:56:53.220970833 +0000 UTC m=+7.865475093" lastFinishedPulling="2025-01-29 15:56:55.91021729 +0000 UTC m=+10.554721510" observedRunningTime="2025-01-29 15:56:56.479928457 +0000 UTC m=+11.124432717" watchObservedRunningTime="2025-01-29 15:56:56.488628484 +0000 UTC m=+11.133132744" Jan 29 15:56:57.473050 kubelet[2572]: E0129 15:56:57.473023 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:56:59.138679 update_engine[1460]: I20250129 15:56:59.138610 1460 update_attempter.cc:509] Updating boot flags... Jan 29 15:56:59.206621 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2966) Jan 29 15:56:59.324672 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2967) Jan 29 15:56:59.347413 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2967) Jan 29 15:56:59.937425 systemd[1]: Created slice kubepods-besteffort-pod05d7f57c_f89f_4df9_9825_c3b4adfbd250.slice - libcontainer container kubepods-besteffort-pod05d7f57c_f89f_4df9_9825_c3b4adfbd250.slice. Jan 29 15:56:59.979973 systemd[1]: Created slice kubepods-besteffort-pod95515102_4080_434a_94b1_2911c595ee41.slice - libcontainer container kubepods-besteffort-pod95515102_4080_434a_94b1_2911c595ee41.slice. 
Jan 29 15:57:00.024868 kubelet[2572]: I0129 15:57:00.024791 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwhws\" (UniqueName: \"kubernetes.io/projected/05d7f57c-f89f-4df9-9825-c3b4adfbd250-kube-api-access-pwhws\") pod \"calico-typha-545f5b9d69-vwtjg\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " pod="calico-system/calico-typha-545f5b9d69-vwtjg" Jan 29 15:57:00.024868 kubelet[2572]: I0129 15:57:00.024842 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-lib-modules\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025344 kubelet[2572]: I0129 15:57:00.024883 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95515102-4080-434a-94b1-2911c595ee41-tigera-ca-bundle\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025344 kubelet[2572]: I0129 15:57:00.024900 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-xtables-lock\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025344 kubelet[2572]: I0129 15:57:00.024916 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-var-run-calico\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025344 kubelet[2572]: I0129 15:57:00.024932 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k9b2\" (UniqueName: \"kubernetes.io/projected/95515102-4080-434a-94b1-2911c595ee41-kube-api-access-7k9b2\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025344 kubelet[2572]: I0129 15:57:00.024949 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05d7f57c-f89f-4df9-9825-c3b4adfbd250-typha-certs\") pod \"calico-typha-545f5b9d69-vwtjg\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " pod="calico-system/calico-typha-545f5b9d69-vwtjg" Jan 29 15:57:00.025490 kubelet[2572]: I0129 15:57:00.024967 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95515102-4080-434a-94b1-2911c595ee41-node-certs\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025490 kubelet[2572]: I0129 15:57:00.024983 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-var-lib-calico\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 
15:57:00.025490 kubelet[2572]: I0129 15:57:00.025016 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d7f57c-f89f-4df9-9825-c3b4adfbd250-tigera-ca-bundle\") pod \"calico-typha-545f5b9d69-vwtjg\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " pod="calico-system/calico-typha-545f5b9d69-vwtjg" Jan 29 15:57:00.025490 kubelet[2572]: I0129 15:57:00.025034 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-policysync\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025490 kubelet[2572]: I0129 15:57:00.025052 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-cni-bin-dir\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025626 kubelet[2572]: I0129 15:57:00.025067 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-cni-log-dir\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025626 kubelet[2572]: I0129 15:57:00.025085 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-cni-net-dir\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.025626 kubelet[2572]: I0129 15:57:00.025104 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95515102-4080-434a-94b1-2911c595ee41-flexvol-driver-host\") pod \"calico-node-pfntn\" (UID: \"95515102-4080-434a-94b1-2911c595ee41\") " pod="calico-system/calico-node-pfntn" Jan 29 15:57:00.080607 kubelet[2572]: E0129 15:57:00.080553 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:00.125699 kubelet[2572]: I0129 15:57:00.125655 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8d0ca05b-3272-4e38-9a00-746f382615ae-varrun\") pod \"csi-node-driver-dqj97\" (UID: \"8d0ca05b-3272-4e38-9a00-746f382615ae\") " pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:00.125826 kubelet[2572]: I0129 15:57:00.125753 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8d0ca05b-3272-4e38-9a00-746f382615ae-socket-dir\") pod \"csi-node-driver-dqj97\" (UID: \"8d0ca05b-3272-4e38-9a00-746f382615ae\") " pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:00.125826 kubelet[2572]: I0129 15:57:00.125775 2572 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8d0ca05b-3272-4e38-9a00-746f382615ae-registration-dir\") pod \"csi-node-driver-dqj97\" (UID: \"8d0ca05b-3272-4e38-9a00-746f382615ae\") " pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:00.125826 kubelet[2572]: I0129 15:57:00.125822 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d0ca05b-3272-4e38-9a00-746f382615ae-kubelet-dir\") pod \"csi-node-driver-dqj97\" (UID: \"8d0ca05b-3272-4e38-9a00-746f382615ae\") " pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:00.125901 kubelet[2572]: I0129 15:57:00.125840 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7nn\" (UniqueName: \"kubernetes.io/projected/8d0ca05b-3272-4e38-9a00-746f382615ae-kube-api-access-8q7nn\") pod \"csi-node-driver-dqj97\" (UID: \"8d0ca05b-3272-4e38-9a00-746f382615ae\") " pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:00.138630 kubelet[2572]: E0129 15:57:00.134377 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.138630 kubelet[2572]: W0129 15:57:00.134417 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.138630 kubelet[2572]: E0129 15:57:00.134447 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.139190 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146521 kubelet[2572]: W0129 15:57:00.140667 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.140695 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.140913 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146521 kubelet[2572]: W0129 15:57:00.140922 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.140936 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.141123 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146521 kubelet[2572]: W0129 15:57:00.141133 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.141147 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146521 kubelet[2572]: E0129 15:57:00.141347 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146843 kubelet[2572]: W0129 15:57:00.141362 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.141380 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.141561 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146843 kubelet[2572]: W0129 15:57:00.141572 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.141581 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.141771 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146843 kubelet[2572]: W0129 15:57:00.141779 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.141799 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.146843 kubelet[2572]: E0129 15:57:00.142001 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.146843 kubelet[2572]: W0129 15:57:00.142010 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.142071 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.143840 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147033 kubelet[2572]: W0129 15:57:00.143855 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.143902 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.144052 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147033 kubelet[2572]: W0129 15:57:00.144060 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.144092 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.146515 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147033 kubelet[2572]: W0129 15:57:00.146529 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147033 kubelet[2572]: E0129 15:57:00.146638 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.147216 kubelet[2572]: E0129 15:57:00.146716 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147216 kubelet[2572]: W0129 15:57:00.146724 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147216 kubelet[2572]: E0129 15:57:00.146781 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.147216 kubelet[2572]: E0129 15:57:00.146946 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147216 kubelet[2572]: W0129 15:57:00.146955 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.147389 kubelet[2572]: E0129 15:57:00.147362 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.147389 kubelet[2572]: W0129 15:57:00.147381 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.151672 kubelet[2572]: E0129 15:57:00.149161 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.151672 kubelet[2572]: E0129 15:57:00.149224 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.152649 kubelet[2572]: E0129 15:57:00.152466 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.152649 kubelet[2572]: W0129 15:57:00.152484 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.152649 kubelet[2572]: E0129 15:57:00.152563 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.157081 kubelet[2572]: E0129 15:57:00.153798 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.157081 kubelet[2572]: W0129 15:57:00.153815 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.157081 kubelet[2572]: E0129 15:57:00.153868 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.157081 kubelet[2572]: E0129 15:57:00.155507 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.157081 kubelet[2572]: W0129 15:57:00.155520 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.157081 kubelet[2572]: E0129 15:57:00.155577 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.157081 kubelet[2572]: E0129 15:57:00.155901 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.157081 kubelet[2572]: W0129 15:57:00.155911 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.157430 kubelet[2572]: E0129 15:57:00.157391 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.157681 kubelet[2572]: E0129 15:57:00.157660 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.157681 kubelet[2572]: W0129 15:57:00.157675 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.158593 kubelet[2572]: E0129 15:57:00.157750 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.158593 kubelet[2572]: E0129 15:57:00.157888 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.158593 kubelet[2572]: W0129 15:57:00.157897 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.158593 kubelet[2572]: E0129 15:57:00.157952 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.158780 kubelet[2572]: E0129 15:57:00.158746 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.158780 kubelet[2572]: W0129 15:57:00.158760 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.158860 kubelet[2572]: E0129 15:57:00.158829 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.160939 kubelet[2572]: E0129 15:57:00.160909 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.160939 kubelet[2572]: W0129 15:57:00.160930 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.161345 kubelet[2572]: E0129 15:57:00.161316 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.161534 kubelet[2572]: E0129 15:57:00.161514 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.161534 kubelet[2572]: W0129 15:57:00.161531 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.161626 kubelet[2572]: E0129 15:57:00.161563 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.161758 kubelet[2572]: E0129 15:57:00.161742 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.161758 kubelet[2572]: W0129 15:57:00.161754 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.161814 kubelet[2572]: E0129 15:57:00.161805 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.164738 kubelet[2572]: E0129 15:57:00.164710 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.164738 kubelet[2572]: W0129 15:57:00.164732 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.164807 kubelet[2572]: E0129 15:57:00.164773 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.165007 kubelet[2572]: E0129 15:57:00.164990 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.165007 kubelet[2572]: W0129 15:57:00.165001 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.165107 kubelet[2572]: E0129 15:57:00.165083 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.166332 kubelet[2572]: E0129 15:57:00.166315 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.166332 kubelet[2572]: W0129 15:57:00.166326 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.166401 kubelet[2572]: E0129 15:57:00.166359 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.166577 kubelet[2572]: E0129 15:57:00.166560 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.166577 kubelet[2572]: W0129 15:57:00.166572 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.166675 kubelet[2572]: E0129 15:57:00.166628 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.166834 kubelet[2572]: E0129 15:57:00.166817 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.166834 kubelet[2572]: W0129 15:57:00.166828 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.166893 kubelet[2572]: E0129 15:57:00.166854 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.167033 kubelet[2572]: E0129 15:57:00.167016 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.167033 kubelet[2572]: W0129 15:57:00.167027 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.167082 kubelet[2572]: E0129 15:57:00.167055 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.170306 kubelet[2572]: E0129 15:57:00.170273 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.170366 kubelet[2572]: W0129 15:57:00.170302 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.172766 kubelet[2572]: E0129 15:57:00.172732 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.172766 kubelet[2572]: W0129 15:57:00.172760 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.173801 kubelet[2572]: E0129 15:57:00.173772 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.173801 kubelet[2572]: W0129 15:57:00.173796 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.174021 kubelet[2572]: E0129 15:57:00.173997 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.174066 kubelet[2572]: E0129 15:57:00.174043 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.174140 kubelet[2572]: E0129 15:57:00.174119 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.176705 kubelet[2572]: E0129 15:57:00.176669 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.176705 kubelet[2572]: W0129 15:57:00.176696 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.177357 kubelet[2572]: E0129 15:57:00.177333 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.181657 kubelet[2572]: E0129 15:57:00.181628 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.181657 kubelet[2572]: W0129 15:57:00.181653 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.181997 kubelet[2572]: E0129 15:57:00.181949 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.184751 kubelet[2572]: E0129 15:57:00.184727 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.184861 kubelet[2572]: W0129 15:57:00.184748 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.184861 kubelet[2572]: E0129 15:57:00.184867 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.186611 kubelet[2572]: E0129 15:57:00.186569 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.186611 kubelet[2572]: W0129 15:57:00.186594 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.186611 kubelet[2572]: E0129 15:57:00.186609 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.227329 kubelet[2572]: E0129 15:57:00.227215 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.227329 kubelet[2572]: W0129 15:57:00.227241 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.227329 kubelet[2572]: E0129 15:57:00.227262 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.227539 kubelet[2572]: E0129 15:57:00.227520 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.227539 kubelet[2572]: W0129 15:57:00.227533 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.227628 kubelet[2572]: E0129 15:57:00.227544 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.228223 kubelet[2572]: E0129 15:57:00.228202 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.228223 kubelet[2572]: W0129 15:57:00.228220 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.228270 kubelet[2572]: E0129 15:57:00.228238 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.228740 kubelet[2572]: E0129 15:57:00.228721 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.228740 kubelet[2572]: W0129 15:57:00.228739 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.228799 kubelet[2572]: E0129 15:57:00.228757 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.229231 kubelet[2572]: E0129 15:57:00.229211 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.229231 kubelet[2572]: W0129 15:57:00.229227 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.229293 kubelet[2572]: E0129 15:57:00.229267 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.229496 kubelet[2572]: E0129 15:57:00.229478 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.229496 kubelet[2572]: W0129 15:57:00.229492 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.229769 kubelet[2572]: E0129 15:57:00.229736 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.229855 kubelet[2572]: E0129 15:57:00.229793 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.229880 kubelet[2572]: W0129 15:57:00.229859 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.230046 kubelet[2572]: E0129 15:57:00.230027 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.230815 kubelet[2572]: E0129 15:57:00.230795 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.230843 kubelet[2572]: W0129 15:57:00.230813 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.230886 kubelet[2572]: E0129 15:57:00.230865 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.231074 kubelet[2572]: E0129 15:57:00.231016 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.231102 kubelet[2572]: W0129 15:57:00.231095 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.231171 kubelet[2572]: E0129 15:57:00.231148 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.231338 kubelet[2572]: E0129 15:57:00.231321 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.231338 kubelet[2572]: W0129 15:57:00.231335 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.231384 kubelet[2572]: E0129 15:57:00.231366 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.231515 kubelet[2572]: E0129 15:57:00.231500 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.231545 kubelet[2572]: W0129 15:57:00.231511 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.231639 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.231761 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232611 kubelet[2572]: W0129 15:57:00.231771 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.231813 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.231926 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232611 kubelet[2572]: W0129 15:57:00.231934 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.231948 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.232086 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232611 kubelet[2572]: W0129 15:57:00.232092 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232611 kubelet[2572]: E0129 15:57:00.232100 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232278 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232996 kubelet[2572]: W0129 15:57:00.232285 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232325 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232446 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232996 kubelet[2572]: W0129 15:57:00.232454 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232481 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232618 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.232996 kubelet[2572]: W0129 15:57:00.232627 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232648 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.232996 kubelet[2572]: E0129 15:57:00.232790 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233275 kubelet[2572]: W0129 15:57:00.232797 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.232811 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.232996 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233275 kubelet[2572]: W0129 15:57:00.233003 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.233041 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.233129 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233275 kubelet[2572]: W0129 15:57:00.233135 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.233253 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233275 kubelet[2572]: W0129 15:57:00.233258 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233275 kubelet[2572]: E0129 15:57:00.233266 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.233777 kubelet[2572]: E0129 15:57:00.233398 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.233777 kubelet[2572]: E0129 15:57:00.233421 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233777 kubelet[2572]: W0129 15:57:00.233428 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233777 kubelet[2572]: E0129 15:57:00.233436 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.233777 kubelet[2572]: E0129 15:57:00.233606 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233777 kubelet[2572]: W0129 15:57:00.233617 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233777 kubelet[2572]: E0129 15:57:00.233626 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:00.233914 kubelet[2572]: E0129 15:57:00.233803 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.233914 kubelet[2572]: W0129 15:57:00.233811 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.233914 kubelet[2572]: E0129 15:57:00.233819 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.234467 kubelet[2572]: E0129 15:57:00.234415 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.234467 kubelet[2572]: W0129 15:57:00.234430 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.234467 kubelet[2572]: E0129 15:57:00.234442 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.243959 kubelet[2572]: E0129 15:57:00.243938 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:00.244766 kubelet[2572]: W0129 15:57:00.244747 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:00.244828 kubelet[2572]: E0129 15:57:00.244815 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:00.248430 kubelet[2572]: E0129 15:57:00.248411 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:00.262202 containerd[1473]: time="2025-01-29T15:57:00.262157281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545f5b9d69-vwtjg,Uid:05d7f57c-f89f-4df9-9825-c3b4adfbd250,Namespace:calico-system,Attempt:0,}" Jan 29 15:57:00.283207 kubelet[2572]: E0129 15:57:00.283176 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:00.284299 containerd[1473]: time="2025-01-29T15:57:00.284039008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pfntn,Uid:95515102-4080-434a-94b1-2911c595ee41,Namespace:calico-system,Attempt:0,}" Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.321909097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.321971768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.321982567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.322062476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.322517975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.322573007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.322626320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:00.323737 containerd[1473]: time="2025-01-29T15:57:00.322718548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:00.344789 systemd[1]: Started cri-containerd-68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1.scope - libcontainer container 68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1. Jan 29 15:57:00.347710 systemd[1]: Started cri-containerd-0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655.scope - libcontainer container 0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655. Jan 29 15:57:00.380017 containerd[1473]: time="2025-01-29T15:57:00.379964942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pfntn,Uid:95515102-4080-434a-94b1-2911c595ee41,Namespace:calico-system,Attempt:0,} returns sandbox id \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\"" Jan 29 15:57:00.385861 containerd[1473]: time="2025-01-29T15:57:00.384049430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-545f5b9d69-vwtjg,Uid:05d7f57c-f89f-4df9-9825-c3b4adfbd250,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\"" Jan 29 15:57:00.389742 kubelet[2572]: E0129 15:57:00.389425 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:00.389742 kubelet[2572]: E0129 15:57:00.389649 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:00.394013 containerd[1473]: time="2025-01-29T15:57:00.393977531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 15:57:01.433859 kubelet[2572]: E0129 15:57:01.433772 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:01.465362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893892896.mount: Deactivated successfully. 
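The repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory for the nodeagent~uds driver: the uds executable is not installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, so the "init" call produces no output at all, and decoding that empty output as JSON fails with "unexpected end of JSON input". As a purely illustrative sketch (not the real uds driver, whose capabilities may differ), a FlexVolume driver's init handler is expected to print a JSON status object roughly like this:

    #!/usr/bin/env python3
    # Minimal sketch of a FlexVolume driver's "init" handler, for illustration only.
    # kubelet invokes the driver binary as:  <driver> init
    # and expects a JSON object with a "status" field on stdout; an empty stdout is
    # exactly what produces the "unexpected end of JSON input" errors above.
    import json
    import sys

    def main() -> int:
        if len(sys.argv) > 1 and sys.argv[1] == "init":
            print(json.dumps({
                "status": "Success",
                "capabilities": {"attach": False},  # no attach/detach support needed
            }))
            return 0
        # Operations this sketch does not implement.
        print(json.dumps({"status": "Not supported"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())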
Jan 29 15:57:01.863838 containerd[1473]: time="2025-01-29T15:57:01.863752653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:01.864380 containerd[1473]: time="2025-01-29T15:57:01.864327778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 29 15:57:01.865168 containerd[1473]: time="2025-01-29T15:57:01.865130673Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:01.867642 containerd[1473]: time="2025-01-29T15:57:01.867609429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:01.868248 containerd[1473]: time="2025-01-29T15:57:01.868213030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.474197424s" Jan 29 15:57:01.868282 containerd[1473]: time="2025-01-29T15:57:01.868245865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 15:57:01.869186 containerd[1473]: time="2025-01-29T15:57:01.869130910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 15:57:01.888242 containerd[1473]: time="2025-01-29T15:57:01.888207416Z" level=info msg="CreateContainer within sandbox \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 15:57:01.898858 containerd[1473]: time="2025-01-29T15:57:01.898773914Z" level=info msg="CreateContainer within sandbox \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\"" Jan 29 15:57:01.899458 containerd[1473]: time="2025-01-29T15:57:01.899294726Z" level=info msg="StartContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\"" Jan 29 15:57:01.926750 systemd[1]: Started cri-containerd-6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68.scope - libcontainer container 6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68. 
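The reported pull time for the typha image can be cross-checked against the surrounding timestamps: the PullImage request was logged at 15:57:00.393977 and the "Pulled image ... in 1.474197424s" message at 15:57:01.868213, about 1.474 s apart (containerd times the pull internally, so the figures differ by a fraction of a millisecond; the sub-microsecond digits are also dropped below because strptime only parses microseconds). A small sketch of that check, with the timestamps copied from the log:

    from datetime import datetime

    # Timestamps taken from the containerd lines above, truncated to microseconds.
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    start = datetime.strptime("2025-01-29T15:57:00.393977Z", fmt)  # PullImage logged
    done = datetime.strptime("2025-01-29T15:57:01.868213Z", fmt)   # "Pulled image" logged
    print((done - start).total_seconds())  # ~1.474, consistent with the reported 1.474197424s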
Jan 29 15:57:01.964403 containerd[1473]: time="2025-01-29T15:57:01.964358660Z" level=info msg="StartContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" returns successfully" Jan 29 15:57:02.505404 kubelet[2572]: E0129 15:57:02.505291 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:02.523895 kubelet[2572]: E0129 15:57:02.523849 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.523895 kubelet[2572]: W0129 15:57:02.523900 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.524055 kubelet[2572]: E0129 15:57:02.523924 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.524263 kubelet[2572]: E0129 15:57:02.524196 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.524263 kubelet[2572]: W0129 15:57:02.524211 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.524263 kubelet[2572]: E0129 15:57:02.524253 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.524490 kubelet[2572]: E0129 15:57:02.524460 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.524490 kubelet[2572]: W0129 15:57:02.524474 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.524490 kubelet[2572]: E0129 15:57:02.524486 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.524711 kubelet[2572]: E0129 15:57:02.524693 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.524711 kubelet[2572]: W0129 15:57:02.524705 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.524770 kubelet[2572]: E0129 15:57:02.524717 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.524920 kubelet[2572]: E0129 15:57:02.524902 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.524920 kubelet[2572]: W0129 15:57:02.524913 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.524971 kubelet[2572]: E0129 15:57:02.524921 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.525283 kubelet[2572]: E0129 15:57:02.525268 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.525283 kubelet[2572]: W0129 15:57:02.525281 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.525351 kubelet[2572]: E0129 15:57:02.525290 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.525473 kubelet[2572]: E0129 15:57:02.525460 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.525473 kubelet[2572]: W0129 15:57:02.525472 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.525531 kubelet[2572]: E0129 15:57:02.525481 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.525673 kubelet[2572]: E0129 15:57:02.525663 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.525706 kubelet[2572]: W0129 15:57:02.525673 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.525706 kubelet[2572]: E0129 15:57:02.525682 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.525939 kubelet[2572]: E0129 15:57:02.525893 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.525939 kubelet[2572]: W0129 15:57:02.525905 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.525939 kubelet[2572]: E0129 15:57:02.525914 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.526116 kubelet[2572]: E0129 15:57:02.526096 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.526169 kubelet[2572]: W0129 15:57:02.526138 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.526169 kubelet[2572]: E0129 15:57:02.526150 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.526319 kubelet[2572]: E0129 15:57:02.526301 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.526356 kubelet[2572]: W0129 15:57:02.526329 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.526356 kubelet[2572]: E0129 15:57:02.526340 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.526526 kubelet[2572]: E0129 15:57:02.526514 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.526526 kubelet[2572]: W0129 15:57:02.526525 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.526581 kubelet[2572]: E0129 15:57:02.526536 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.526712 kubelet[2572]: E0129 15:57:02.526702 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.526740 kubelet[2572]: W0129 15:57:02.526711 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.526740 kubelet[2572]: E0129 15:57:02.526719 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.526863 kubelet[2572]: E0129 15:57:02.526854 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.526885 kubelet[2572]: W0129 15:57:02.526862 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.526885 kubelet[2572]: E0129 15:57:02.526870 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.526999 kubelet[2572]: E0129 15:57:02.526991 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.527076 kubelet[2572]: W0129 15:57:02.526999 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.527076 kubelet[2572]: E0129 15:57:02.527006 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.551516 kubelet[2572]: E0129 15:57:02.551479 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.551516 kubelet[2572]: W0129 15:57:02.551504 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.551516 kubelet[2572]: E0129 15:57:02.551525 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.551817 kubelet[2572]: E0129 15:57:02.551796 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.551817 kubelet[2572]: W0129 15:57:02.551810 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.551877 kubelet[2572]: E0129 15:57:02.551826 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.552107 kubelet[2572]: E0129 15:57:02.552092 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.552107 kubelet[2572]: W0129 15:57:02.552105 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.552188 kubelet[2572]: E0129 15:57:02.552120 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.552374 kubelet[2572]: E0129 15:57:02.552353 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.552374 kubelet[2572]: W0129 15:57:02.552365 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.552494 kubelet[2572]: E0129 15:57:02.552378 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.552544 kubelet[2572]: E0129 15:57:02.552532 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.552544 kubelet[2572]: W0129 15:57:02.552542 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.552608 kubelet[2572]: E0129 15:57:02.552554 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.552737 kubelet[2572]: E0129 15:57:02.552716 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.552737 kubelet[2572]: W0129 15:57:02.552726 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.552802 kubelet[2572]: E0129 15:57:02.552755 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.552914 kubelet[2572]: E0129 15:57:02.552898 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.552914 kubelet[2572]: W0129 15:57:02.552907 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.552973 kubelet[2572]: E0129 15:57:02.552929 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.553244 kubelet[2572]: E0129 15:57:02.553058 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.553244 kubelet[2572]: W0129 15:57:02.553075 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.553244 kubelet[2572]: E0129 15:57:02.553089 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.553370 kubelet[2572]: E0129 15:57:02.553251 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.553370 kubelet[2572]: W0129 15:57:02.553258 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.553370 kubelet[2572]: E0129 15:57:02.553276 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.553468 kubelet[2572]: E0129 15:57:02.553455 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.553468 kubelet[2572]: W0129 15:57:02.553464 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.553511 kubelet[2572]: E0129 15:57:02.553477 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.553690 kubelet[2572]: E0129 15:57:02.553676 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.553690 kubelet[2572]: W0129 15:57:02.553688 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.553777 kubelet[2572]: E0129 15:57:02.553701 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.553997 kubelet[2572]: E0129 15:57:02.553968 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.553997 kubelet[2572]: W0129 15:57:02.553984 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.554324 kubelet[2572]: E0129 15:57:02.554003 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.554324 kubelet[2572]: E0129 15:57:02.554243 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.554324 kubelet[2572]: W0129 15:57:02.554254 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.554324 kubelet[2572]: E0129 15:57:02.554268 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.554435 kubelet[2572]: E0129 15:57:02.554417 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.554435 kubelet[2572]: W0129 15:57:02.554426 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.554482 kubelet[2572]: E0129 15:57:02.554438 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:02.554666 kubelet[2572]: E0129 15:57:02.554649 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.554666 kubelet[2572]: W0129 15:57:02.554664 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.554742 kubelet[2572]: E0129 15:57:02.554683 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.554887 kubelet[2572]: E0129 15:57:02.554875 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.554887 kubelet[2572]: W0129 15:57:02.554887 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.554943 kubelet[2572]: E0129 15:57:02.554901 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.555183 kubelet[2572]: E0129 15:57:02.555152 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.555183 kubelet[2572]: W0129 15:57:02.555163 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.555355 kubelet[2572]: E0129 15:57:02.555185 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 15:57:02.555550 kubelet[2572]: E0129 15:57:02.555536 2572 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 15:57:02.555550 kubelet[2572]: W0129 15:57:02.555548 2572 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 15:57:02.555642 kubelet[2572]: E0129 15:57:02.555557 2572 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 15:57:03.021795 kubelet[2572]: I0129 15:57:03.021363 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-545f5b9d69-vwtjg" podStartSLOduration=2.543901808 podStartE2EDuration="4.021342678s" podCreationTimestamp="2025-01-29 15:56:59 +0000 UTC" firstStartedPulling="2025-01-29 15:57:00.391558177 +0000 UTC m=+15.036062437" lastFinishedPulling="2025-01-29 15:57:01.868999047 +0000 UTC m=+16.513503307" observedRunningTime="2025-01-29 15:57:02.524249839 +0000 UTC m=+17.168754139" watchObservedRunningTime="2025-01-29 15:57:03.021342678 +0000 UTC m=+17.665846938" Jan 29 15:57:03.055281 containerd[1473]: time="2025-01-29T15:57:03.055225481Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:03.056197 containerd[1473]: time="2025-01-29T15:57:03.055958951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 15:57:03.059755 containerd[1473]: time="2025-01-29T15:57:03.059676815Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:03.061819 containerd[1473]: time="2025-01-29T15:57:03.061765958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:03.062390 containerd[1473]: time="2025-01-29T15:57:03.062356166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.193164624s" Jan 29 15:57:03.062440 containerd[1473]: time="2025-01-29T15:57:03.062390522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 15:57:03.066135 containerd[1473]: time="2025-01-29T15:57:03.065907890Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 15:57:03.130954 containerd[1473]: time="2025-01-29T15:57:03.130821846Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1\"" Jan 29 15:57:03.131375 containerd[1473]: time="2025-01-29T15:57:03.131347901Z" level=info msg="StartContainer for \"b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1\"" Jan 29 15:57:03.156741 systemd[1]: Started cri-containerd-b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1.scope - libcontainer container b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1. 
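The pod_startup_latency_tracker entry above for calico-typha-545f5b9d69-vwtjg is internally consistent: podStartSLOduration appears to be the end-to-end duration minus the time spent pulling images (lastFinishedPulling - firstStartedPulling), which matches the usual description of kubelet's startup-latency SLI as excluding image pulls. The figures below are copied from that log entry, and the subtraction reproduces it exactly:

    # Figures copied from the "Observed pod startup duration" entry above.
    e2e = 4.021342678                     # podStartE2EDuration, seconds
    pull = 16.513503307 - 15.036062437    # lastFinishedPulling - firstStartedPulling (m= offsets)
    slo = e2e - pull
    print(round(pull, 9))                 # 1.47744087
    print(round(slo, 9))                  # 2.543901808 == podStartSLOduration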
Jan 29 15:57:03.183092 containerd[1473]: time="2025-01-29T15:57:03.182187663Z" level=info msg="StartContainer for \"b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1\" returns successfully" Jan 29 15:57:03.225400 systemd[1]: cri-containerd-b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1.scope: Deactivated successfully. Jan 29 15:57:03.258222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1-rootfs.mount: Deactivated successfully. Jan 29 15:57:03.335854 containerd[1473]: time="2025-01-29T15:57:03.321236802Z" level=info msg="shim disconnected" id=b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1 namespace=k8s.io Jan 29 15:57:03.335854 containerd[1473]: time="2025-01-29T15:57:03.335859928Z" level=warning msg="cleaning up after shim disconnected" id=b6a18c65fb22c6ce50a59139f2d2877b9fc57e0ee92d616f1bf8e833ea89c7b1 namespace=k8s.io Jan 29 15:57:03.336196 containerd[1473]: time="2025-01-29T15:57:03.335876046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:03.435792 kubelet[2572]: E0129 15:57:03.435758 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:03.502574 kubelet[2572]: E0129 15:57:03.502348 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:03.503284 kubelet[2572]: E0129 15:57:03.503255 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:03.504350 containerd[1473]: time="2025-01-29T15:57:03.503770966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 15:57:04.504655 kubelet[2572]: E0129 15:57:04.504178 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:05.434139 kubelet[2572]: E0129 15:57:05.433848 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:06.851780 containerd[1473]: time="2025-01-29T15:57:06.851732649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:06.852707 containerd[1473]: time="2025-01-29T15:57:06.852479726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 15:57:06.853676 containerd[1473]: time="2025-01-29T15:57:06.853407182Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:06.860019 containerd[1473]: time="2025-01-29T15:57:06.859930255Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:06.860664 containerd[1473]: time="2025-01-29T15:57:06.860627617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.356808136s" Jan 29 15:57:06.860664 containerd[1473]: time="2025-01-29T15:57:06.860656134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 15:57:06.863362 containerd[1473]: time="2025-01-29T15:57:06.863327716Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 15:57:06.874843 containerd[1473]: time="2025-01-29T15:57:06.874800956Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c\"" Jan 29 15:57:06.875327 containerd[1473]: time="2025-01-29T15:57:06.875302540Z" level=info msg="StartContainer for \"3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c\"" Jan 29 15:57:06.904741 systemd[1]: Started cri-containerd-3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c.scope - libcontainer container 3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c. Jan 29 15:57:06.928649 containerd[1473]: time="2025-01-29T15:57:06.928601755Z" level=info msg="StartContainer for \"3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c\" returns successfully" Jan 29 15:57:07.433951 kubelet[2572]: E0129 15:57:07.433861 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:07.449550 containerd[1473]: time="2025-01-29T15:57:07.449509574Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 15:57:07.451661 systemd[1]: cri-containerd-3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c.scope: Deactivated successfully. Jan 29 15:57:07.451928 systemd[1]: cri-containerd-3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c.scope: Consumed 434ms CPU time, 155.8M memory peak, 4K read from disk, 147.4M written to disk. Jan 29 15:57:07.469195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c-rootfs.mount: Deactivated successfully. 
Jan 29 15:57:07.477397 kubelet[2572]: I0129 15:57:07.477353 2572 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 15:57:07.514107 kubelet[2572]: E0129 15:57:07.514072 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:07.571453 containerd[1473]: time="2025-01-29T15:57:07.571375845Z" level=info msg="shim disconnected" id=3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c namespace=k8s.io Jan 29 15:57:07.571913 containerd[1473]: time="2025-01-29T15:57:07.571490232Z" level=warning msg="cleaning up after shim disconnected" id=3c76a81b840878d2516163bc13f1545ea1f680fa08d14620a13e8b9bbc98e78c namespace=k8s.io Jan 29 15:57:07.571913 containerd[1473]: time="2025-01-29T15:57:07.571683931Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:07.596088 systemd[1]: Created slice kubepods-burstable-podba02e7a6_9c5e_4aac_977f_d5a6845ef7ef.slice - libcontainer container kubepods-burstable-podba02e7a6_9c5e_4aac_977f_d5a6845ef7ef.slice. Jan 29 15:57:07.603879 systemd[1]: Created slice kubepods-burstable-pode13e233f_36bf_4ccc_9393_e6e06b49a20a.slice - libcontainer container kubepods-burstable-pode13e233f_36bf_4ccc_9393_e6e06b49a20a.slice. Jan 29 15:57:07.609971 systemd[1]: Created slice kubepods-besteffort-podeb0b4103_78c0_4eef_8691_75f802520548.slice - libcontainer container kubepods-besteffort-podeb0b4103_78c0_4eef_8691_75f802520548.slice. Jan 29 15:57:07.614180 systemd[1]: Created slice kubepods-besteffort-pod6b44de13_eff2_42e8_844c_3aa53fc7af03.slice - libcontainer container kubepods-besteffort-pod6b44de13_eff2_42e8_844c_3aa53fc7af03.slice. Jan 29 15:57:07.620196 systemd[1]: Created slice kubepods-besteffort-podcc63fcad_580a_4790_b92e_85d54cee6129.slice - libcontainer container kubepods-besteffort-podcc63fcad_580a_4790_b92e_85d54cee6129.slice. 
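The slice names created above follow kubelet's systemd cgroup-driver convention: the QoS class (burstable or besteffort) and the pod UID are nested under kubepods, with the dashes in the UID replaced by underscores because "-" acts as the hierarchy separator in systemd unit names. A small sketch that reproduces the names seen in the log from the pod UIDs, assuming only that convention:

    def pod_slice_name(qos: str, pod_uid: str) -> str:
        # systemd encodes the slice hierarchy with "-", so the UID's dashes are
        # rewritten to underscores in the leaf unit name.
        return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

    # UIDs taken from the "Created slice" lines above.
    print(pod_slice_name("burstable", "ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef"))
    # -> kubepods-burstable-podba02e7a6_9c5e_4aac_977f_d5a6845ef7ef.slice
    print(pod_slice_name("besteffort", "cc63fcad-580a-4790-b92e-85d54cee6129"))
    # -> kubepods-besteffort-podcc63fcad_580a_4790_b92e_85d54cee6129.slice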
Jan 29 15:57:07.683289 kubelet[2572]: I0129 15:57:07.683233 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b44de13-eff2-42e8-844c-3aa53fc7af03-tigera-ca-bundle\") pod \"calico-kube-controllers-596fc4546b-ddndj\" (UID: \"6b44de13-eff2-42e8-844c-3aa53fc7af03\") " pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:07.683289 kubelet[2572]: I0129 15:57:07.683287 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc63fcad-580a-4790-b92e-85d54cee6129-calico-apiserver-certs\") pod \"calico-apiserver-7fd9769987-jfw4v\" (UID: \"cc63fcad-580a-4790-b92e-85d54cee6129\") " pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:07.683474 kubelet[2572]: I0129 15:57:07.683313 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb0b4103-78c0-4eef-8691-75f802520548-calico-apiserver-certs\") pod \"calico-apiserver-7fd9769987-pstm9\" (UID: \"eb0b4103-78c0-4eef-8691-75f802520548\") " pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:07.683474 kubelet[2572]: I0129 15:57:07.683332 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf572\" (UniqueName: \"kubernetes.io/projected/6b44de13-eff2-42e8-844c-3aa53fc7af03-kube-api-access-cf572\") pod \"calico-kube-controllers-596fc4546b-ddndj\" (UID: \"6b44de13-eff2-42e8-844c-3aa53fc7af03\") " pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:07.683474 kubelet[2572]: I0129 15:57:07.683349 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4s4f\" (UniqueName: \"kubernetes.io/projected/e13e233f-36bf-4ccc-9393-e6e06b49a20a-kube-api-access-g4s4f\") pod \"coredns-668d6bf9bc-x5c64\" (UID: \"e13e233f-36bf-4ccc-9393-e6e06b49a20a\") " pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:07.683474 kubelet[2572]: I0129 15:57:07.683374 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dzhd\" (UniqueName: \"kubernetes.io/projected/cc63fcad-580a-4790-b92e-85d54cee6129-kube-api-access-4dzhd\") pod \"calico-apiserver-7fd9769987-jfw4v\" (UID: \"cc63fcad-580a-4790-b92e-85d54cee6129\") " pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:07.683474 kubelet[2572]: I0129 15:57:07.683436 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jqvw\" (UniqueName: \"kubernetes.io/projected/ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef-kube-api-access-2jqvw\") pod \"coredns-668d6bf9bc-8bcmc\" (UID: \"ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef\") " pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:07.683604 kubelet[2572]: I0129 15:57:07.683475 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e13e233f-36bf-4ccc-9393-e6e06b49a20a-config-volume\") pod \"coredns-668d6bf9bc-x5c64\" (UID: \"e13e233f-36bf-4ccc-9393-e6e06b49a20a\") " pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:07.683604 kubelet[2572]: I0129 15:57:07.683496 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef-config-volume\") pod \"coredns-668d6bf9bc-8bcmc\" (UID: \"ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef\") " pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:07.683604 kubelet[2572]: I0129 15:57:07.683516 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljlr6\" (UniqueName: \"kubernetes.io/projected/eb0b4103-78c0-4eef-8691-75f802520548-kube-api-access-ljlr6\") pod \"calico-apiserver-7fd9769987-pstm9\" (UID: \"eb0b4103-78c0-4eef-8691-75f802520548\") " pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:07.901337 kubelet[2572]: E0129 15:57:07.901281 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:07.901949 containerd[1473]: time="2025-01-29T15:57:07.901900208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:0,}" Jan 29 15:57:07.908229 kubelet[2572]: E0129 15:57:07.907879 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:07.908493 containerd[1473]: time="2025-01-29T15:57:07.908454339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:0,}" Jan 29 15:57:07.915481 containerd[1473]: time="2025-01-29T15:57:07.915438865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:0,}" Jan 29 15:57:07.918123 containerd[1473]: time="2025-01-29T15:57:07.918087818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:0,}" Jan 29 15:57:07.923073 containerd[1473]: time="2025-01-29T15:57:07.923030844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:0,}" Jan 29 15:57:08.314673 containerd[1473]: time="2025-01-29T15:57:08.313838621Z" level=error msg="Failed to destroy network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.320268 containerd[1473]: time="2025-01-29T15:57:08.320104885Z" level=error msg="encountered an error cleaning up failed sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.320268 containerd[1473]: time="2025-01-29T15:57:08.320240591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.322630 kubelet[2572]: E0129 15:57:08.321287 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.324632 kubelet[2572]: E0129 15:57:08.324577 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:08.324911 kubelet[2572]: E0129 15:57:08.324791 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:08.325044 kubelet[2572]: E0129 15:57:08.324858 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:08.331115 containerd[1473]: time="2025-01-29T15:57:08.331058138Z" level=error msg="Failed to destroy network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.331470 containerd[1473]: time="2025-01-29T15:57:08.331442538Z" level=error msg="encountered an error cleaning up failed sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.331535 containerd[1473]: time="2025-01-29T15:57:08.331514091Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.331755 kubelet[2572]: E0129 15:57:08.331726 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.331808 kubelet[2572]: E0129 15:57:08.331775 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:08.331808 kubelet[2572]: E0129 15:57:08.331792 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:08.331929 kubelet[2572]: E0129 15:57:08.331824 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:08.333479 containerd[1473]: time="2025-01-29T15:57:08.333443729Z" level=error msg="Failed to destroy network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.334438 containerd[1473]: time="2025-01-29T15:57:08.334219807Z" level=error msg="encountered an error cleaning up failed sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.334438 containerd[1473]: 
time="2025-01-29T15:57:08.334315717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.334698 kubelet[2572]: E0129 15:57:08.334613 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.334698 kubelet[2572]: E0129 15:57:08.334682 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:08.334698 kubelet[2572]: E0129 15:57:08.334707 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:08.335242 kubelet[2572]: E0129 15:57:08.334747 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:08.337303 containerd[1473]: time="2025-01-29T15:57:08.336475531Z" level=error msg="Failed to destroy network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.337975 containerd[1473]: time="2025-01-29T15:57:08.337672286Z" level=error msg="encountered an error cleaning up failed sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 15:57:08.337975 containerd[1473]: time="2025-01-29T15:57:08.337862066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.338782 kubelet[2572]: E0129 15:57:08.338307 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.338782 kubelet[2572]: E0129 15:57:08.338484 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:08.338782 kubelet[2572]: E0129 15:57:08.338502 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:08.338910 kubelet[2572]: E0129 15:57:08.338680 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:08.340148 containerd[1473]: time="2025-01-29T15:57:08.340113470Z" level=error msg="Failed to destroy network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.340790 containerd[1473]: time="2025-01-29T15:57:08.340422118Z" level=error msg="encountered an error cleaning up failed sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.340790 containerd[1473]: time="2025-01-29T15:57:08.340496070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.340905 kubelet[2572]: E0129 15:57:08.340683 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.340905 kubelet[2572]: E0129 15:57:08.340719 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:08.340905 kubelet[2572]: E0129 15:57:08.340737 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:08.341137 kubelet[2572]: E0129 15:57:08.341102 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:08.516406 kubelet[2572]: I0129 15:57:08.516362 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14" Jan 29 15:57:08.518340 kubelet[2572]: I0129 15:57:08.517992 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc" Jan 29 15:57:08.518409 containerd[1473]: time="2025-01-29T15:57:08.516913922Z" level=info 
msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:08.518409 containerd[1473]: time="2025-01-29T15:57:08.518212226Z" level=info msg="Ensure that sandbox 8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14 in task-service has been cleanup successfully" Jan 29 15:57:08.518666 containerd[1473]: time="2025-01-29T15:57:08.518544991Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:08.518666 containerd[1473]: time="2025-01-29T15:57:08.518565229Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:08.519448 containerd[1473]: time="2025-01-29T15:57:08.519162527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:1,}" Jan 29 15:57:08.519544 kubelet[2572]: I0129 15:57:08.519520 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7" Jan 29 15:57:08.519980 containerd[1473]: time="2025-01-29T15:57:08.519956564Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:08.520124 containerd[1473]: time="2025-01-29T15:57:08.520106748Z" level=info msg="Ensure that sandbox 2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7 in task-service has been cleanup successfully" Jan 29 15:57:08.520753 containerd[1473]: time="2025-01-29T15:57:08.519368465Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:08.521286 containerd[1473]: time="2025-01-29T15:57:08.520861989Z" level=info msg="Ensure that sandbox fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc in task-service has been cleanup successfully" Jan 29 15:57:08.521286 containerd[1473]: time="2025-01-29T15:57:08.520985736Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:08.521286 containerd[1473]: time="2025-01-29T15:57:08.520998175Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:08.521286 containerd[1473]: time="2025-01-29T15:57:08.521093365Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:08.521286 containerd[1473]: time="2025-01-29T15:57:08.521110883Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:08.522545 containerd[1473]: time="2025-01-29T15:57:08.521555756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:1,}" Jan 29 15:57:08.522545 containerd[1473]: time="2025-01-29T15:57:08.521904120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:1,}" Jan 29 15:57:08.524214 kubelet[2572]: I0129 15:57:08.524180 2572 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9" Jan 29 15:57:08.526148 containerd[1473]: time="2025-01-29T15:57:08.525921019Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:08.526856 containerd[1473]: time="2025-01-29T15:57:08.526818285Z" level=info msg="Ensure that sandbox 9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9 in task-service has been cleanup successfully" Jan 29 15:57:08.527015 containerd[1473]: time="2025-01-29T15:57:08.526972429Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:08.527015 containerd[1473]: time="2025-01-29T15:57:08.526989267Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:08.527677 kubelet[2572]: E0129 15:57:08.527518 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:08.528154 kubelet[2572]: I0129 15:57:08.528132 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061" Jan 29 15:57:08.528555 containerd[1473]: time="2025-01-29T15:57:08.528470752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:1,}" Jan 29 15:57:08.529154 containerd[1473]: time="2025-01-29T15:57:08.529129683Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:08.533342 kubelet[2572]: E0129 15:57:08.532989 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:08.534137 containerd[1473]: time="2025-01-29T15:57:08.534065207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 15:57:08.539017 containerd[1473]: time="2025-01-29T15:57:08.538935617Z" level=info msg="Ensure that sandbox e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061 in task-service has been cleanup successfully" Jan 29 15:57:08.539189 containerd[1473]: time="2025-01-29T15:57:08.539156554Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:08.539189 containerd[1473]: time="2025-01-29T15:57:08.539170072Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:08.539448 kubelet[2572]: E0129 15:57:08.539406 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:08.540676 containerd[1473]: time="2025-01-29T15:57:08.540635519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:1,}" Jan 29 15:57:08.624524 containerd[1473]: time="2025-01-29T15:57:08.624401470Z" level=error msg="Failed to destroy network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.626301 containerd[1473]: time="2025-01-29T15:57:08.626254356Z" level=error msg="encountered an error cleaning up failed sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.626494 containerd[1473]: time="2025-01-29T15:57:08.626345026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.626802 kubelet[2572]: E0129 15:57:08.626742 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.626872 kubelet[2572]: E0129 15:57:08.626802 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:08.626872 kubelet[2572]: E0129 15:57:08.626826 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:08.626928 kubelet[2572]: E0129 15:57:08.626862 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:08.633796 containerd[1473]: time="2025-01-29T15:57:08.633393409Z" level=error msg="Failed to destroy network for 
sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.634054 containerd[1473]: time="2025-01-29T15:57:08.634024263Z" level=error msg="encountered an error cleaning up failed sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.634297 containerd[1473]: time="2025-01-29T15:57:08.634270397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.635121 kubelet[2572]: E0129 15:57:08.634778 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.635530 kubelet[2572]: E0129 15:57:08.635370 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:08.635530 kubelet[2572]: E0129 15:57:08.635427 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:08.635530 kubelet[2572]: E0129 15:57:08.635480 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" 
podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:08.637965 containerd[1473]: time="2025-01-29T15:57:08.637897137Z" level=error msg="Failed to destroy network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.638975 containerd[1473]: time="2025-01-29T15:57:08.638808922Z" level=error msg="encountered an error cleaning up failed sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.638975 containerd[1473]: time="2025-01-29T15:57:08.638870715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.639163 kubelet[2572]: E0129 15:57:08.639042 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.639163 kubelet[2572]: E0129 15:57:08.639115 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:08.639163 kubelet[2572]: E0129 15:57:08.639149 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:08.639341 kubelet[2572]: E0129 15:57:08.639189 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:08.640750 containerd[1473]: time="2025-01-29T15:57:08.640644090Z" level=error msg="Failed to destroy network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.642214 containerd[1473]: time="2025-01-29T15:57:08.642145172Z" level=error msg="encountered an error cleaning up failed sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.642796 containerd[1473]: time="2025-01-29T15:57:08.642735031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.643014 kubelet[2572]: E0129 15:57:08.642903 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.643014 kubelet[2572]: E0129 15:57:08.642946 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:08.643014 kubelet[2572]: E0129 15:57:08.642967 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:08.643100 kubelet[2572]: E0129 15:57:08.643008 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:08.644332 containerd[1473]: time="2025-01-29T15:57:08.644295667Z" level=error msg="Failed to destroy network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.644818 containerd[1473]: time="2025-01-29T15:57:08.644753739Z" level=error msg="encountered an error cleaning up failed sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.644982 containerd[1473]: time="2025-01-29T15:57:08.644905243Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.645208 kubelet[2572]: E0129 15:57:08.645170 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:08.645254 kubelet[2572]: E0129 15:57:08.645217 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:08.645254 kubelet[2572]: E0129 15:57:08.645232 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:08.645325 kubelet[2572]: E0129 15:57:08.645265 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:08.873045 systemd[1]: run-netns-cni\x2df47d0db9\x2d87c8\x2df49f\x2d242c\x2d369b05239fdd.mount: Deactivated successfully. Jan 29 15:57:08.873133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9-shm.mount: Deactivated successfully. Jan 29 15:57:08.873185 systemd[1]: run-netns-cni\x2da4a2513f\x2d3d5c\x2d2b7a\x2d765e\x2df0bddfafa179.mount: Deactivated successfully. Jan 29 15:57:08.873228 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061-shm.mount: Deactivated successfully. Jan 29 15:57:09.438126 systemd[1]: Created slice kubepods-besteffort-pod8d0ca05b_3272_4e38_9a00_746f382615ae.slice - libcontainer container kubepods-besteffort-pod8d0ca05b_3272_4e38_9a00_746f382615ae.slice. Jan 29 15:57:09.439961 containerd[1473]: time="2025-01-29T15:57:09.439923853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:0,}" Jan 29 15:57:09.494316 containerd[1473]: time="2025-01-29T15:57:09.494274661Z" level=error msg="Failed to destroy network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.494829 containerd[1473]: time="2025-01-29T15:57:09.494726096Z" level=error msg="encountered an error cleaning up failed sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.494829 containerd[1473]: time="2025-01-29T15:57:09.494785170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.495154 kubelet[2572]: E0129 15:57:09.495065 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.495154 kubelet[2572]: E0129 15:57:09.495126 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:09.495154 kubelet[2572]: E0129 15:57:09.495145 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:09.495922 kubelet[2572]: E0129 15:57:09.495187 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:09.497336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583-shm.mount: Deactivated successfully. Jan 29 15:57:09.535382 kubelet[2572]: I0129 15:57:09.535340 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c" Jan 29 15:57:09.535808 containerd[1473]: time="2025-01-29T15:57:09.535783172Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:09.535946 containerd[1473]: time="2025-01-29T15:57:09.535924798Z" level=info msg="Ensure that sandbox 163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c in task-service has been cleanup successfully" Jan 29 15:57:09.536122 containerd[1473]: time="2025-01-29T15:57:09.536094820Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:09.536122 containerd[1473]: time="2025-01-29T15:57:09.536114658Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:09.537229 containerd[1473]: time="2025-01-29T15:57:09.537192309Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:09.538094 containerd[1473]: time="2025-01-29T15:57:09.537277580Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:09.538094 containerd[1473]: time="2025-01-29T15:57:09.537293779Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:09.538180 containerd[1473]: time="2025-01-29T15:57:09.538098737Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:2,}" Jan 29 15:57:09.538990 kubelet[2572]: I0129 15:57:09.538378 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46" Jan 29 15:57:09.538821 systemd[1]: run-netns-cni\x2dabeb4818\x2de4ae\x2dc8c5\x2dec99\x2d608ef378dc8c.mount: Deactivated successfully. Jan 29 15:57:09.540105 containerd[1473]: time="2025-01-29T15:57:09.539470758Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:09.540105 containerd[1473]: time="2025-01-29T15:57:09.539641101Z" level=info msg="Ensure that sandbox ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46 in task-service has been cleanup successfully" Jan 29 15:57:09.540105 containerd[1473]: time="2025-01-29T15:57:09.539957509Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:09.540105 containerd[1473]: time="2025-01-29T15:57:09.539980026Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:09.541120 containerd[1473]: time="2025-01-29T15:57:09.540533770Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:09.541120 containerd[1473]: time="2025-01-29T15:57:09.540637240Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:09.541120 containerd[1473]: time="2025-01-29T15:57:09.540648159Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:09.541508 containerd[1473]: time="2025-01-29T15:57:09.541453917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:2,}" Jan 29 15:57:09.541723 kubelet[2572]: I0129 15:57:09.541692 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787" Jan 29 15:57:09.542119 containerd[1473]: time="2025-01-29T15:57:09.542098771Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:09.542511 containerd[1473]: time="2025-01-29T15:57:09.542418819Z" level=info msg="Ensure that sandbox 2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787 in task-service has been cleanup successfully" Jan 29 15:57:09.542854 containerd[1473]: time="2025-01-29T15:57:09.542756505Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:09.542854 containerd[1473]: time="2025-01-29T15:57:09.542775623Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:09.543153 containerd[1473]: time="2025-01-29T15:57:09.543128027Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:09.543234 containerd[1473]: time="2025-01-29T15:57:09.543213658Z" level=info msg="TearDown network for sandbox 
\"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:09.543234 containerd[1473]: time="2025-01-29T15:57:09.543224497Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:09.543505 kubelet[2572]: I0129 15:57:09.543226 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147" Jan 29 15:57:09.543505 kubelet[2572]: E0129 15:57:09.543462 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:09.543763 containerd[1473]: time="2025-01-29T15:57:09.543744685Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:09.544248 containerd[1473]: time="2025-01-29T15:57:09.543895389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:2,}" Jan 29 15:57:09.544248 containerd[1473]: time="2025-01-29T15:57:09.544113487Z" level=info msg="Ensure that sandbox 951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147 in task-service has been cleanup successfully" Jan 29 15:57:09.547612 containerd[1473]: time="2025-01-29T15:57:09.544396418Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:09.547612 containerd[1473]: time="2025-01-29T15:57:09.546884046Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:09.547612 containerd[1473]: time="2025-01-29T15:57:09.547577736Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:09.548010 containerd[1473]: time="2025-01-29T15:57:09.547701323Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:09.548064 containerd[1473]: time="2025-01-29T15:57:09.548011692Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:09.548635 containerd[1473]: time="2025-01-29T15:57:09.548581114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:2,}" Jan 29 15:57:09.549194 kubelet[2572]: I0129 15:57:09.549168 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4" Jan 29 15:57:09.550976 containerd[1473]: time="2025-01-29T15:57:09.550880041Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:09.551096 containerd[1473]: time="2025-01-29T15:57:09.551054823Z" level=info msg="Ensure that sandbox 76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4 in task-service has been cleanup successfully" Jan 29 15:57:09.551240 containerd[1473]: time="2025-01-29T15:57:09.551202888Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:09.551240 containerd[1473]: 
time="2025-01-29T15:57:09.551236525Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:09.551477 kubelet[2572]: I0129 15:57:09.551452 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583" Jan 29 15:57:09.551884 containerd[1473]: time="2025-01-29T15:57:09.551842743Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:09.552160 containerd[1473]: time="2025-01-29T15:57:09.552132954Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:09.552160 containerd[1473]: time="2025-01-29T15:57:09.552154632Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:09.552435 kubelet[2572]: E0129 15:57:09.552414 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:09.552597 containerd[1473]: time="2025-01-29T15:57:09.552554751Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:09.552973 containerd[1473]: time="2025-01-29T15:57:09.552937272Z" level=info msg="Ensure that sandbox e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583 in task-service has been cleanup successfully" Jan 29 15:57:09.553234 containerd[1473]: time="2025-01-29T15:57:09.553188927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:2,}" Jan 29 15:57:09.553943 containerd[1473]: time="2025-01-29T15:57:09.553907454Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:09.553943 containerd[1473]: time="2025-01-29T15:57:09.553935171Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:09.554520 containerd[1473]: time="2025-01-29T15:57:09.554496274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:1,}" Jan 29 15:57:09.620948 containerd[1473]: time="2025-01-29T15:57:09.620898780Z" level=error msg="Failed to destroy network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.621776 containerd[1473]: time="2025-01-29T15:57:09.621749094Z" level=error msg="encountered an error cleaning up failed sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.621908 containerd[1473]: time="2025-01-29T15:57:09.621887400Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.622761 kubelet[2572]: E0129 15:57:09.622711 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.622846 kubelet[2572]: E0129 15:57:09.622773 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:09.622846 kubelet[2572]: E0129 15:57:09.622799 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:09.622894 kubelet[2572]: E0129 15:57:09.622841 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:09.663865 containerd[1473]: time="2025-01-29T15:57:09.663711198Z" level=error msg="Failed to destroy network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.664805 containerd[1473]: time="2025-01-29T15:57:09.664772651Z" level=error msg="encountered an error cleaning up failed sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 15:57:09.664920 containerd[1473]: time="2025-01-29T15:57:09.664834525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.665363 kubelet[2572]: E0129 15:57:09.665132 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.665363 kubelet[2572]: E0129 15:57:09.665250 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:09.665363 kubelet[2572]: E0129 15:57:09.665280 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:09.665475 kubelet[2572]: E0129 15:57:09.665323 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:09.682209 containerd[1473]: time="2025-01-29T15:57:09.682065657Z" level=error msg="Failed to destroy network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.683118 containerd[1473]: time="2025-01-29T15:57:09.683079994Z" level=error msg="encountered an error cleaning up failed sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.683287 containerd[1473]: time="2025-01-29T15:57:09.683149787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.683504 kubelet[2572]: E0129 15:57:09.683451 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.683573 kubelet[2572]: E0129 15:57:09.683507 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:09.683573 kubelet[2572]: E0129 15:57:09.683529 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:09.683633 kubelet[2572]: E0129 15:57:09.683563 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:09.685647 containerd[1473]: time="2025-01-29T15:57:09.685270092Z" level=error msg="Failed to destroy network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.686268 containerd[1473]: time="2025-01-29T15:57:09.686215956Z" level=error msg="encountered an error cleaning up failed sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.686324 containerd[1473]: time="2025-01-29T15:57:09.686276710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.686498 kubelet[2572]: E0129 15:57:09.686446 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.686543 kubelet[2572]: E0129 15:57:09.686506 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:09.686543 kubelet[2572]: E0129 15:57:09.686526 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:09.686607 kubelet[2572]: E0129 15:57:09.686557 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:09.689811 containerd[1473]: time="2025-01-29T15:57:09.689706482Z" level=error msg="Failed to destroy network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.692116 containerd[1473]: time="2025-01-29T15:57:09.692063803Z" level=error msg="encountered an error cleaning up failed sandbox 
\"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.692164 containerd[1473]: time="2025-01-29T15:57:09.692138316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.692434 kubelet[2572]: E0129 15:57:09.692396 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.692473 kubelet[2572]: E0129 15:57:09.692440 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:09.692473 kubelet[2572]: E0129 15:57:09.692457 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:09.692520 kubelet[2572]: E0129 15:57:09.692491 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:09.702596 containerd[1473]: time="2025-01-29T15:57:09.702535861Z" level=error msg="Failed to destroy network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.702906 containerd[1473]: time="2025-01-29T15:57:09.702879426Z" level=error 
msg="encountered an error cleaning up failed sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.702955 containerd[1473]: time="2025-01-29T15:57:09.702935301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.703255 kubelet[2572]: E0129 15:57:09.703115 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:09.703255 kubelet[2572]: E0129 15:57:09.703158 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:09.703255 kubelet[2572]: E0129 15:57:09.703173 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:09.703360 kubelet[2572]: E0129 15:57:09.703216 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:09.873369 systemd[1]: run-netns-cni\x2dd886b378\x2daeb8\x2de25e\x2d7a19\x2df7cae0d5f870.mount: Deactivated successfully. Jan 29 15:57:09.873455 systemd[1]: run-netns-cni\x2d8cb312e5\x2d42b2\x2d82e6\x2da0fc\x2dfed596a153cd.mount: Deactivated successfully. 
Jan 29 15:57:09.873506 systemd[1]: run-netns-cni\x2d06a5899d\x2dc357\x2d934a\x2d8d61\x2d57ced0b6d32c.mount: Deactivated successfully. Jan 29 15:57:09.873550 systemd[1]: run-netns-cni\x2d44e282ed\x2d4d01\x2d7103\x2dbe86\x2d5aa54986fcf0.mount: Deactivated successfully. Jan 29 15:57:09.873607 systemd[1]: run-netns-cni\x2d22bfa184\x2d0af0\x2d7cc0\x2dbfd7\x2d951ac72917cd.mount: Deactivated successfully. Jan 29 15:57:10.555339 kubelet[2572]: I0129 15:57:10.555190 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169" Jan 29 15:57:10.555869 containerd[1473]: time="2025-01-29T15:57:10.555832045Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:10.556944 containerd[1473]: time="2025-01-29T15:57:10.556017867Z" level=info msg="Ensure that sandbox b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169 in task-service has been cleanup successfully" Jan 29 15:57:10.557549 kubelet[2572]: I0129 15:57:10.557526 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9" Jan 29 15:57:10.557989 containerd[1473]: time="2025-01-29T15:57:10.557922520Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:10.558085 containerd[1473]: time="2025-01-29T15:57:10.558060346Z" level=info msg="Ensure that sandbox d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9 in task-service has been cleanup successfully" Jan 29 15:57:10.558137 containerd[1473]: time="2025-01-29T15:57:10.558112181Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:10.558137 containerd[1473]: time="2025-01-29T15:57:10.558129019Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:10.558699 containerd[1473]: time="2025-01-29T15:57:10.558537899Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:10.558699 containerd[1473]: time="2025-01-29T15:57:10.558616212Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:10.558699 containerd[1473]: time="2025-01-29T15:57:10.558626691Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:10.559249 containerd[1473]: time="2025-01-29T15:57:10.558907063Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:10.559249 containerd[1473]: time="2025-01-29T15:57:10.558961218Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:10.559249 containerd[1473]: time="2025-01-29T15:57:10.558969417Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:10.558909 systemd[1]: run-netns-cni\x2df745dff0\x2d34dc\x2d957f\x2d0bff\x2d6ed0df5a9475.mount: Deactivated successfully. 
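Every add and delete failure above reduces to the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename and aborts when the file is absent, printing the hint about the calico/node container. Below is a minimal, illustrative Go sketch of that check, not the actual plugin source; the path and the guidance string are taken verbatim from the log messages above.

package main

import (
	"fmt"
	"os"
)

// Path taken from the "stat /var/lib/calico/nodename" errors in the log above.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			// Same guidance the CNI plugin prints in the log entries above.
			fmt.Fprintf(os.Stderr, "stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Fprintf(os.Stderr, "stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	}
	fmt.Printf("%s is present; sandbox network setup should be able to proceed\n", nodenameFile)
}
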
Jan 29 15:57:10.559476 containerd[1473]: time="2025-01-29T15:57:10.559406454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:3,}" Jan 29 15:57:10.559812 kubelet[2572]: I0129 15:57:10.559733 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84" Jan 29 15:57:10.559869 containerd[1473]: time="2025-01-29T15:57:10.559770938Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:10.559869 containerd[1473]: time="2025-01-29T15:57:10.559789576Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560093306Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560175418Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560184617Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560212215Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560343722Z" level=info msg="Ensure that sandbox b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84 in task-service has been cleanup successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560441152Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560527304Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560537303Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:10.560980 containerd[1473]: time="2025-01-29T15:57:10.560939503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:3,}" Jan 29 15:57:10.561429 containerd[1473]: time="2025-01-29T15:57:10.561221476Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:10.561429 containerd[1473]: time="2025-01-29T15:57:10.561241394Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:10.561546 containerd[1473]: time="2025-01-29T15:57:10.561519326Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:10.561652 containerd[1473]: time="2025-01-29T15:57:10.561636515Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 
15:57:10.561752 containerd[1473]: time="2025-01-29T15:57:10.561650433Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:10.561896 kubelet[2572]: I0129 15:57:10.561812 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81" Jan 29 15:57:10.561862 systemd[1]: run-netns-cni\x2d6d0b4b26\x2d6226\x2d509f\x2d008f\x2d2af32cbd57f1.mount: Deactivated successfully. Jan 29 15:57:10.562388 containerd[1473]: time="2025-01-29T15:57:10.562238536Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:10.562388 containerd[1473]: time="2025-01-29T15:57:10.562310689Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:10.562388 containerd[1473]: time="2025-01-29T15:57:10.562319768Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:10.562388 containerd[1473]: time="2025-01-29T15:57:10.562365043Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:10.562710 containerd[1473]: time="2025-01-29T15:57:10.562493631Z" level=info msg="Ensure that sandbox 5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81 in task-service has been cleanup successfully" Jan 29 15:57:10.563163 containerd[1473]: time="2025-01-29T15:57:10.563119489Z" level=info msg="TearDown network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:10.563163 containerd[1473]: time="2025-01-29T15:57:10.563142567Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:10.563435 containerd[1473]: time="2025-01-29T15:57:10.563265875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:3,}" Jan 29 15:57:10.564705 containerd[1473]: time="2025-01-29T15:57:10.564679696Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:10.564775 containerd[1473]: time="2025-01-29T15:57:10.564759088Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:10.564775 containerd[1473]: time="2025-01-29T15:57:10.564772407Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:10.564909 systemd[1]: run-netns-cni\x2dbebd364d\x2dc546\x2d05c5\x2d707a\x2d473e5fddc82a.mount: Deactivated successfully. Jan 29 15:57:10.564985 systemd[1]: run-netns-cni\x2dc41eb595\x2db645\x2da807\x2d665e\x2df19b6d300903.mount: Deactivated successfully. 
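The entries around this point show the recovery loop itself: for each failed sandbox, a StopPodSandbox/TearDown pass removes the netns, and the pod is re-queued as a RunPodSandbox with the Attempt counter incremented (Attempt:2 above, Attempt:3 here, Attempt:4 further down). A schematic Go sketch of that cycle follows; the function names are hypothetical stand-ins, not kubelet or containerd APIs.

package main

import (
	"errors"
	"fmt"
)

// runPodSandbox and tearDownSandbox are hypothetical stand-ins for the CRI calls
// in the log; while /var/lib/calico/nodename is missing, setup always fails.
func runPodSandbox(pod string, attempt int) error {
	return errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)
}

func tearDownSandbox(pod string, attempt int) {
	fmt.Printf("TearDown network for %s (attempt %d) successfully\n", pod, attempt)
}

func main() {
	pod := "calico-kube-controllers-596fc4546b-ddndj_calico-system"
	// Mirrors the Attempt:2 -> Attempt:3 -> Attempt:4 progression in the log.
	for attempt := 2; attempt <= 4; attempt++ {
		if err := runPodSandbox(pod, attempt); err != nil {
			fmt.Printf("Attempt:%d failed: %v\n", attempt, err)
			tearDownSandbox(pod, attempt)
			continue
		}
		fmt.Printf("Attempt:%d succeeded\n", attempt)
		return
	}
	fmt.Println("still failing; pod_workers keeps reporting \"Error syncing pod, skipping\"")
}
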
Jan 29 15:57:10.565243 containerd[1473]: time="2025-01-29T15:57:10.565227562Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:10.565303 containerd[1473]: time="2025-01-29T15:57:10.565288876Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:10.565332 containerd[1473]: time="2025-01-29T15:57:10.565302555Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:10.565848 kubelet[2572]: E0129 15:57:10.565631 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:10.567006 containerd[1473]: time="2025-01-29T15:57:10.566956192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:3,}" Jan 29 15:57:10.567077 kubelet[2572]: I0129 15:57:10.566975 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671" Jan 29 15:57:10.568071 containerd[1473]: time="2025-01-29T15:57:10.568046405Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:10.568228 containerd[1473]: time="2025-01-29T15:57:10.568208269Z" level=info msg="Ensure that sandbox 2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671 in task-service has been cleanup successfully" Jan 29 15:57:10.568429 containerd[1473]: time="2025-01-29T15:57:10.568407930Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:10.568473 containerd[1473]: time="2025-01-29T15:57:10.568427568Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:10.568867 containerd[1473]: time="2025-01-29T15:57:10.568667784Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:10.568867 containerd[1473]: time="2025-01-29T15:57:10.568739137Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:10.568867 containerd[1473]: time="2025-01-29T15:57:10.568749736Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:10.569001 kubelet[2572]: I0129 15:57:10.568979 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c" Jan 29 15:57:10.569259 containerd[1473]: time="2025-01-29T15:57:10.569225769Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:10.569325 containerd[1473]: time="2025-01-29T15:57:10.569308241Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:10.569325 containerd[1473]: time="2025-01-29T15:57:10.569321360Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:10.569529 kubelet[2572]: 
E0129 15:57:10.569496 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:10.570556 containerd[1473]: time="2025-01-29T15:57:10.570526281Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:10.570695 containerd[1473]: time="2025-01-29T15:57:10.570661428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:3,}" Jan 29 15:57:10.570741 containerd[1473]: time="2025-01-29T15:57:10.570726782Z" level=info msg="Ensure that sandbox 17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c in task-service has been cleanup successfully" Jan 29 15:57:10.571105 containerd[1473]: time="2025-01-29T15:57:10.570940361Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:10.571105 containerd[1473]: time="2025-01-29T15:57:10.570958199Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:10.571363 containerd[1473]: time="2025-01-29T15:57:10.571326483Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:10.571572 containerd[1473]: time="2025-01-29T15:57:10.571407715Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:10.571572 containerd[1473]: time="2025-01-29T15:57:10.571421074Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:10.571908 containerd[1473]: time="2025-01-29T15:57:10.571863470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:2,}" Jan 29 15:57:10.782887 containerd[1473]: time="2025-01-29T15:57:10.782811306Z" level=error msg="Failed to destroy network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.784348 containerd[1473]: time="2025-01-29T15:57:10.784312079Z" level=error msg="encountered an error cleaning up failed sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.784433 containerd[1473]: time="2025-01-29T15:57:10.784379952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.784977 kubelet[2572]: E0129 15:57:10.784560 
2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.784977 kubelet[2572]: E0129 15:57:10.784620 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:10.784977 kubelet[2572]: E0129 15:57:10.784646 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:10.785243 kubelet[2572]: E0129 15:57:10.784691 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:10.794593 containerd[1473]: time="2025-01-29T15:57:10.793775989Z" level=error msg="Failed to destroy network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.794593 containerd[1473]: time="2025-01-29T15:57:10.794103597Z" level=error msg="encountered an error cleaning up failed sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.794593 containerd[1473]: time="2025-01-29T15:57:10.794157511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.794893 kubelet[2572]: E0129 15:57:10.794852 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.794956 kubelet[2572]: E0129 15:57:10.794915 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:10.794956 kubelet[2572]: E0129 15:57:10.794940 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:10.795023 kubelet[2572]: E0129 15:57:10.794979 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:10.800736 containerd[1473]: time="2025-01-29T15:57:10.800698069Z" level=error msg="Failed to destroy network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.801156 containerd[1473]: time="2025-01-29T15:57:10.801124107Z" level=error msg="encountered an error cleaning up failed sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.801238 containerd[1473]: time="2025-01-29T15:57:10.801194420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.801423 kubelet[2572]: E0129 15:57:10.801385 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.801481 kubelet[2572]: E0129 15:57:10.801451 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:10.801481 kubelet[2572]: E0129 15:57:10.801473 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:10.801544 kubelet[2572]: E0129 15:57:10.801516 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:10.803785 containerd[1473]: time="2025-01-29T15:57:10.803749249Z" level=error msg="Failed to destroy network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.804106 containerd[1473]: time="2025-01-29T15:57:10.804063498Z" level=error msg="encountered an error cleaning up failed sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.804170 containerd[1473]: time="2025-01-29T15:57:10.804137211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup 
network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.804540 kubelet[2572]: E0129 15:57:10.804380 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.804540 kubelet[2572]: E0129 15:57:10.804427 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:10.804540 kubelet[2572]: E0129 15:57:10.804444 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:10.804714 kubelet[2572]: E0129 15:57:10.804477 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:10.805216 containerd[1473]: time="2025-01-29T15:57:10.805181708Z" level=error msg="Failed to destroy network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.806208 containerd[1473]: time="2025-01-29T15:57:10.806121056Z" level=error msg="encountered an error cleaning up failed sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.806208 containerd[1473]: time="2025-01-29T15:57:10.806180770Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.806396 kubelet[2572]: E0129 15:57:10.806372 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.806801 kubelet[2572]: E0129 15:57:10.806667 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:10.806801 kubelet[2572]: E0129 15:57:10.806694 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:10.806801 kubelet[2572]: E0129 15:57:10.806729 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:10.820158 containerd[1473]: time="2025-01-29T15:57:10.820108682Z" level=error msg="Failed to destroy network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.820752 containerd[1473]: time="2025-01-29T15:57:10.820711463Z" level=error msg="encountered an error cleaning up failed sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.820814 containerd[1473]: 
time="2025-01-29T15:57:10.820775136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.821206 kubelet[2572]: E0129 15:57:10.821004 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:10.821206 kubelet[2572]: E0129 15:57:10.821056 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:10.821206 kubelet[2572]: E0129 15:57:10.821087 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:10.821325 kubelet[2572]: E0129 15:57:10.821126 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:10.874370 systemd[1]: run-netns-cni\x2df182c303\x2d887a\x2dfd29\x2dfb85\x2d54c24b6834fa.mount: Deactivated successfully. Jan 29 15:57:10.874453 systemd[1]: run-netns-cni\x2dac0ccf8e\x2db29c\x2df0af\x2d75e7\x2ddb5305174bf8.mount: Deactivated successfully. 
Jan 29 15:57:11.573102 kubelet[2572]: I0129 15:57:11.573071 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7" Jan 29 15:57:11.575929 kubelet[2572]: I0129 15:57:11.575898 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f" Jan 29 15:57:11.579132 kubelet[2572]: I0129 15:57:11.578648 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1" Jan 29 15:57:11.582091 kubelet[2572]: I0129 15:57:11.582060 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3" Jan 29 15:57:11.582189 containerd[1473]: time="2025-01-29T15:57:11.582061011Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:11.582646 containerd[1473]: time="2025-01-29T15:57:11.582620477Z" level=info msg="Ensure that sandbox 6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f in task-service has been cleanup successfully" Jan 29 15:57:11.582902 containerd[1473]: time="2025-01-29T15:57:11.582864574Z" level=info msg="TearDown network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" successfully" Jan 29 15:57:11.582985 containerd[1473]: time="2025-01-29T15:57:11.582969764Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" returns successfully" Jan 29 15:57:11.583131 containerd[1473]: time="2025-01-29T15:57:11.583108471Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:11.583327 containerd[1473]: time="2025-01-29T15:57:11.583306892Z" level=info msg="Ensure that sandbox c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7 in task-service has been cleanup successfully" Jan 29 15:57:11.583604 containerd[1473]: time="2025-01-29T15:57:11.583370726Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:11.583806 containerd[1473]: time="2025-01-29T15:57:11.583785807Z" level=info msg="Ensure that sandbox 4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3 in task-service has been cleanup successfully" Jan 29 15:57:11.584178 containerd[1473]: time="2025-01-29T15:57:11.583942632Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:11.584324 containerd[1473]: time="2025-01-29T15:57:11.584246763Z" level=info msg="TearDown network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" successfully" Jan 29 15:57:11.584324 containerd[1473]: time="2025-01-29T15:57:11.584272240Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" returns successfully" Jan 29 15:57:11.584450 containerd[1473]: time="2025-01-29T15:57:11.584362912Z" level=info msg="Ensure that sandbox ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1 in task-service has been cleanup successfully" Jan 29 15:57:11.584581 containerd[1473]: time="2025-01-29T15:57:11.583468437Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:11.584812 containerd[1473]: 
time="2025-01-29T15:57:11.584530096Z" level=info msg="TearDown network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" successfully" Jan 29 15:57:11.584812 containerd[1473]: time="2025-01-29T15:57:11.584737836Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" returns successfully" Jan 29 15:57:11.584812 containerd[1473]: time="2025-01-29T15:57:11.584557053Z" level=info msg="TearDown network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" successfully" Jan 29 15:57:11.584812 containerd[1473]: time="2025-01-29T15:57:11.584782432Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" returns successfully" Jan 29 15:57:11.585019 containerd[1473]: time="2025-01-29T15:57:11.584951576Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:11.585019 containerd[1473]: time="2025-01-29T15:57:11.584968374Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:11.585329 containerd[1473]: time="2025-01-29T15:57:11.585305782Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:11.585450 containerd[1473]: time="2025-01-29T15:57:11.585434250Z" level=info msg="TearDown network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:11.585447 systemd[1]: run-netns-cni\x2d1cd386a7\x2d95be\x2d576d\x2d59d9\x2d7fa0bc83161f.mount: Deactivated successfully. Jan 29 15:57:11.585532 systemd[1]: run-netns-cni\x2d752aef74\x2d6771\x2d3ff9\x2d0210\x2dd64d719658c4.mount: Deactivated successfully. 
Jan 29 15:57:11.586057 containerd[1473]: time="2025-01-29T15:57:11.585669147Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:11.586057 containerd[1473]: time="2025-01-29T15:57:11.585748900Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:11.586057 containerd[1473]: time="2025-01-29T15:57:11.585808054Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:11.586057 containerd[1473]: time="2025-01-29T15:57:11.585820253Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:11.586057 containerd[1473]: time="2025-01-29T15:57:11.585877487Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:11.586318 containerd[1473]: time="2025-01-29T15:57:11.585912124Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:11.586487 containerd[1473]: time="2025-01-29T15:57:11.585930082Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:11.586612 containerd[1473]: time="2025-01-29T15:57:11.586538865Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:11.586841 containerd[1473]: time="2025-01-29T15:57:11.586483150Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:11.587224 containerd[1473]: time="2025-01-29T15:57:11.586724087Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:11.588083 containerd[1473]: time="2025-01-29T15:57:11.587720472Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.587730031Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588214665Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588290938Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588348332Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588356971Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588297297Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588398408Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" 
Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588221584Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:11.588598 containerd[1473]: time="2025-01-29T15:57:11.588421565Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:11.588794 containerd[1473]: time="2025-01-29T15:57:11.588630385Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:11.588794 containerd[1473]: time="2025-01-29T15:57:11.588422885Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:11.588895 systemd[1]: run-netns-cni\x2de1af3a83\x2d84d8\x2dc11a\x2d2fbf\x2db8f7188dabed.mount: Deactivated successfully. Jan 29 15:57:11.588976 systemd[1]: run-netns-cni\x2d30e363f8\x2d4a05\x2d5a7b\x2dc5d4\x2d8188eef0980c.mount: Deactivated successfully. Jan 29 15:57:11.589126 containerd[1473]: time="2025-01-29T15:57:11.589104220Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:11.589399 containerd[1473]: time="2025-01-29T15:57:11.589178413Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:11.589901 containerd[1473]: time="2025-01-29T15:57:11.589534859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:4,}" Jan 29 15:57:11.589901 containerd[1473]: time="2025-01-29T15:57:11.589673966Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:11.589901 containerd[1473]: time="2025-01-29T15:57:11.589899665Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:11.590073 containerd[1473]: time="2025-01-29T15:57:11.589738960Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:11.590811 containerd[1473]: time="2025-01-29T15:57:11.590772262Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:11.590811 containerd[1473]: time="2025-01-29T15:57:11.590796939Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:11.590975 containerd[1473]: time="2025-01-29T15:57:11.590952085Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:11.591047 containerd[1473]: time="2025-01-29T15:57:11.591032517Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:11.591383 containerd[1473]: time="2025-01-29T15:57:11.591361086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:4,}" Jan 29 15:57:11.591652 kubelet[2572]: E0129 15:57:11.591331 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:11.592286 kubelet[2572]: I0129 15:57:11.591527 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d" Jan 29 15:57:11.592350 containerd[1473]: time="2025-01-29T15:57:11.591996945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:4,}" Jan 29 15:57:11.592350 containerd[1473]: time="2025-01-29T15:57:11.592013463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:4,}" Jan 29 15:57:11.592350 containerd[1473]: time="2025-01-29T15:57:11.592240322Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:11.592439 containerd[1473]: time="2025-01-29T15:57:11.592368910Z" level=info msg="Ensure that sandbox aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d in task-service has been cleanup successfully" Jan 29 15:57:11.592624 containerd[1473]: time="2025-01-29T15:57:11.592578770Z" level=info msg="TearDown network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" successfully" Jan 29 15:57:11.592624 containerd[1473]: time="2025-01-29T15:57:11.592622046Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" returns successfully" Jan 29 15:57:11.593190 containerd[1473]: time="2025-01-29T15:57:11.592909778Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:11.593332 containerd[1473]: time="2025-01-29T15:57:11.593315060Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:11.593384 containerd[1473]: time="2025-01-29T15:57:11.593371974Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:11.593788 containerd[1473]: time="2025-01-29T15:57:11.593760457Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:11.593863 containerd[1473]: time="2025-01-29T15:57:11.593847369Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:11.593863 containerd[1473]: time="2025-01-29T15:57:11.593861408Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:11.594311 containerd[1473]: time="2025-01-29T15:57:11.594272129Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:11.594368 containerd[1473]: time="2025-01-29T15:57:11.594351081Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:11.594368 containerd[1473]: time="2025-01-29T15:57:11.594365400Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:11.595274 kubelet[2572]: E0129 15:57:11.594772 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:11.595274 kubelet[2572]: I0129 15:57:11.595081 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401" Jan 29 15:57:11.595457 containerd[1473]: time="2025-01-29T15:57:11.595135166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:4,}" Jan 29 15:57:11.595842 systemd[1]: run-netns-cni\x2d41b9e824\x2deb1f\x2d9bb6\x2d4fc5\x2dfb6f00fff34f.mount: Deactivated successfully. Jan 29 15:57:11.596025 containerd[1473]: time="2025-01-29T15:57:11.595994925Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:11.596158 containerd[1473]: time="2025-01-29T15:57:11.596139671Z" level=info msg="Ensure that sandbox 9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401 in task-service has been cleanup successfully" Jan 29 15:57:11.597631 containerd[1473]: time="2025-01-29T15:57:11.597057863Z" level=info msg="TearDown network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" successfully" Jan 29 15:57:11.597631 containerd[1473]: time="2025-01-29T15:57:11.597082181Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" returns successfully" Jan 29 15:57:11.597631 containerd[1473]: time="2025-01-29T15:57:11.597498861Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:11.597745 containerd[1473]: time="2025-01-29T15:57:11.597665846Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:11.597745 containerd[1473]: time="2025-01-29T15:57:11.597711881Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:11.598087 containerd[1473]: time="2025-01-29T15:57:11.598056128Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:11.598920 containerd[1473]: time="2025-01-29T15:57:11.598187276Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:11.598920 containerd[1473]: time="2025-01-29T15:57:11.598200235Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:11.598920 containerd[1473]: time="2025-01-29T15:57:11.598669190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:3,}" Jan 29 15:57:11.598730 systemd[1]: run-netns-cni\x2ddbdcfe30\x2d48d0\x2d330a\x2dcdd5\x2db350e8c0f783.mount: Deactivated successfully. 
Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.941795094Z" level=error msg="Failed to destroy network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.941995195Z" level=error msg="Failed to destroy network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.942482428Z" level=error msg="encountered an error cleaning up failed sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.942534544Z" level=error msg="encountered an error cleaning up failed sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.942582619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.943659 containerd[1473]: time="2025-01-29T15:57:11.942544503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.944927 kubelet[2572]: E0129 15:57:11.942864 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.944927 kubelet[2572]: E0129 15:57:11.942919 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:11.944927 kubelet[2572]: E0129 15:57:11.942937 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:11.945124 kubelet[2572]: E0129 15:57:11.942985 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:11.945124 kubelet[2572]: E0129 15:57:11.942837 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.945124 kubelet[2572]: E0129 15:57:11.943247 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:11.945212 kubelet[2572]: E0129 15:57:11.943263 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:11.945212 kubelet[2572]: E0129 15:57:11.943291 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:11.946918 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d-shm.mount: Deactivated successfully. Jan 29 15:57:11.947026 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70-shm.mount: Deactivated successfully. Jan 29 15:57:11.957089 containerd[1473]: time="2025-01-29T15:57:11.956883178Z" level=error msg="Failed to destroy network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.957307 containerd[1473]: time="2025-01-29T15:57:11.957277700Z" level=error msg="encountered an error cleaning up failed sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.957558 containerd[1473]: time="2025-01-29T15:57:11.957341294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.957833 kubelet[2572]: E0129 15:57:11.957722 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.957833 kubelet[2572]: E0129 15:57:11.957779 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:11.957833 kubelet[2572]: E0129 15:57:11.957798 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:11.957990 kubelet[2572]: E0129 15:57:11.957838 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:11.961740 containerd[1473]: time="2025-01-29T15:57:11.961629086Z" level=error msg="Failed to destroy network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.962057 containerd[1473]: time="2025-01-29T15:57:11.962029368Z" level=error msg="encountered an error cleaning up failed sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.962245 containerd[1473]: time="2025-01-29T15:57:11.962153996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.962191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323-shm.mount: Deactivated successfully. 
Jan 29 15:57:11.962391 kubelet[2572]: E0129 15:57:11.962341 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.962430 kubelet[2572]: E0129 15:57:11.962386 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:11.962430 kubelet[2572]: E0129 15:57:11.962404 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:11.962480 kubelet[2572]: E0129 15:57:11.962445 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:11.964888 containerd[1473]: time="2025-01-29T15:57:11.964832341Z" level=error msg="Failed to destroy network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.965171 containerd[1473]: time="2025-01-29T15:57:11.965139032Z" level=error msg="encountered an error cleaning up failed sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.965239 containerd[1473]: time="2025-01-29T15:57:11.965189667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.965524 kubelet[2572]: E0129 15:57:11.965344 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.965524 kubelet[2572]: E0129 15:57:11.965389 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:11.965524 kubelet[2572]: E0129 15:57:11.965403 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:11.965517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a-shm.mount: Deactivated successfully. Jan 29 15:57:11.965698 kubelet[2572]: E0129 15:57:11.965433 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:11.978838 containerd[1473]: time="2025-01-29T15:57:11.978785533Z" level=error msg="Failed to destroy network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.979445 containerd[1473]: time="2025-01-29T15:57:11.979393676Z" level=error msg="encountered an error cleaning up failed sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.979502 containerd[1473]: time="2025-01-29T15:57:11.979457789Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.979984 kubelet[2572]: E0129 15:57:11.979644 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:11.979984 kubelet[2572]: E0129 15:57:11.979701 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:11.979984 kubelet[2572]: E0129 15:57:11.979718 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:11.980265 kubelet[2572]: E0129 15:57:11.979761 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:12.607424 kubelet[2572]: I0129 15:57:12.607393 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a" Jan 29 15:57:12.608527 containerd[1473]: time="2025-01-29T15:57:12.608461651Z" level=info msg="StopPodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" Jan 29 15:57:12.609786 containerd[1473]: time="2025-01-29T15:57:12.608642634Z" level=info msg="Ensure that sandbox 2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a in task-service has been cleanup successfully" Jan 29 15:57:12.609786 containerd[1473]: time="2025-01-29T15:57:12.608906290Z" level=info msg="TearDown network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" successfully" Jan 29 15:57:12.609786 containerd[1473]: 
time="2025-01-29T15:57:12.608923848Z" level=info msg="StopPodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" returns successfully" Jan 29 15:57:12.609786 containerd[1473]: time="2025-01-29T15:57:12.609516993Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:12.609786 containerd[1473]: time="2025-01-29T15:57:12.609621944Z" level=info msg="TearDown network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" successfully" Jan 29 15:57:12.609786 containerd[1473]: time="2025-01-29T15:57:12.609635342Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" returns successfully" Jan 29 15:57:12.609994 containerd[1473]: time="2025-01-29T15:57:12.609879560Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:12.609994 containerd[1473]: time="2025-01-29T15:57:12.609968032Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:12.609994 containerd[1473]: time="2025-01-29T15:57:12.609978191Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:12.610352 containerd[1473]: time="2025-01-29T15:57:12.610324559Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:12.610431 containerd[1473]: time="2025-01-29T15:57:12.610415391Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:12.610463 containerd[1473]: time="2025-01-29T15:57:12.610429709Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:12.611030 containerd[1473]: time="2025-01-29T15:57:12.610927183Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:12.611030 containerd[1473]: time="2025-01-29T15:57:12.611014175Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:12.611030 containerd[1473]: time="2025-01-29T15:57:12.611026094Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:12.611798 kubelet[2572]: E0129 15:57:12.611765 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:12.612260 kubelet[2572]: I0129 15:57:12.612016 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805" Jan 29 15:57:12.612320 containerd[1473]: time="2025-01-29T15:57:12.612062879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:5,}" Jan 29 15:57:12.613023 containerd[1473]: time="2025-01-29T15:57:12.612994073Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" Jan 29 15:57:12.613184 containerd[1473]: time="2025-01-29T15:57:12.613147299Z" level=info msg="Ensure that sandbox 
e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805 in task-service has been cleanup successfully" Jan 29 15:57:12.613413 containerd[1473]: time="2025-01-29T15:57:12.613354240Z" level=info msg="TearDown network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" successfully" Jan 29 15:57:12.613413 containerd[1473]: time="2025-01-29T15:57:12.613373678Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" returns successfully" Jan 29 15:57:12.613809 containerd[1473]: time="2025-01-29T15:57:12.613704407Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:12.614287 containerd[1473]: time="2025-01-29T15:57:12.614260636Z" level=info msg="TearDown network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" successfully" Jan 29 15:57:12.614287 containerd[1473]: time="2025-01-29T15:57:12.614284794Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" returns successfully" Jan 29 15:57:12.615130 containerd[1473]: time="2025-01-29T15:57:12.614914736Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:12.615130 containerd[1473]: time="2025-01-29T15:57:12.615051203Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:12.615130 containerd[1473]: time="2025-01-29T15:57:12.615064522Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:12.615754 containerd[1473]: time="2025-01-29T15:57:12.615479084Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:12.615754 containerd[1473]: time="2025-01-29T15:57:12.615558956Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:12.615754 containerd[1473]: time="2025-01-29T15:57:12.615568795Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:12.616229 containerd[1473]: time="2025-01-29T15:57:12.616017834Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:12.616229 containerd[1473]: time="2025-01-29T15:57:12.616106906Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:12.616229 containerd[1473]: time="2025-01-29T15:57:12.616117385Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:12.616535 kubelet[2572]: I0129 15:57:12.616509 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d" Jan 29 15:57:12.617384 containerd[1473]: time="2025-01-29T15:57:12.616936589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:5,}" Jan 29 15:57:12.617384 containerd[1473]: time="2025-01-29T15:57:12.617307395Z" level=info msg="StopPodSandbox for 
\"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" Jan 29 15:57:12.618603 containerd[1473]: time="2025-01-29T15:57:12.618554760Z" level=info msg="Ensure that sandbox 77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d in task-service has been cleanup successfully" Jan 29 15:57:12.619873 containerd[1473]: time="2025-01-29T15:57:12.619811724Z" level=info msg="TearDown network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" successfully" Jan 29 15:57:12.619873 containerd[1473]: time="2025-01-29T15:57:12.619836522Z" level=info msg="StopPodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" returns successfully" Jan 29 15:57:12.634528 containerd[1473]: time="2025-01-29T15:57:12.634487651Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:12.634889 containerd[1473]: time="2025-01-29T15:57:12.634859297Z" level=info msg="TearDown network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" successfully" Jan 29 15:57:12.635082 containerd[1473]: time="2025-01-29T15:57:12.635055839Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" returns successfully" Jan 29 15:57:12.635444 containerd[1473]: time="2025-01-29T15:57:12.635417125Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:12.635941 containerd[1473]: time="2025-01-29T15:57:12.635491959Z" level=info msg="TearDown network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:12.635941 containerd[1473]: time="2025-01-29T15:57:12.635501438Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:12.636016 kubelet[2572]: I0129 15:57:12.635667 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da" Jan 29 15:57:12.636528 containerd[1473]: time="2025-01-29T15:57:12.636225851Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:12.636528 containerd[1473]: time="2025-01-29T15:57:12.636295244Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" Jan 29 15:57:12.636528 containerd[1473]: time="2025-01-29T15:57:12.636298204Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:12.636528 containerd[1473]: time="2025-01-29T15:57:12.636434432Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:12.636528 containerd[1473]: time="2025-01-29T15:57:12.636453550Z" level=info msg="Ensure that sandbox cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da in task-service has been cleanup successfully" Jan 29 15:57:12.636707 containerd[1473]: time="2025-01-29T15:57:12.636623454Z" level=info msg="TearDown network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" successfully" Jan 29 15:57:12.636707 containerd[1473]: time="2025-01-29T15:57:12.636637013Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" returns successfully" Jan 29 
15:57:12.636751 containerd[1473]: time="2025-01-29T15:57:12.636738164Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:12.637234 containerd[1473]: time="2025-01-29T15:57:12.636815996Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:12.637234 containerd[1473]: time="2025-01-29T15:57:12.636834075Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:12.637320 kubelet[2572]: E0129 15:57:12.637105 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:12.637720 containerd[1473]: time="2025-01-29T15:57:12.637472936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:5,}" Jan 29 15:57:12.638148 containerd[1473]: time="2025-01-29T15:57:12.638119836Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:12.638212 containerd[1473]: time="2025-01-29T15:57:12.638199389Z" level=info msg="TearDown network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" successfully" Jan 29 15:57:12.638237 containerd[1473]: time="2025-01-29T15:57:12.638212988Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" returns successfully" Jan 29 15:57:12.638506 containerd[1473]: time="2025-01-29T15:57:12.638452246Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:12.638552 containerd[1473]: time="2025-01-29T15:57:12.638533918Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:12.638552 containerd[1473]: time="2025-01-29T15:57:12.638544677Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:12.639052 containerd[1473]: time="2025-01-29T15:57:12.639025793Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:12.639129 containerd[1473]: time="2025-01-29T15:57:12.639112585Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:12.639129 containerd[1473]: time="2025-01-29T15:57:12.639126983Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:12.639341 kubelet[2572]: I0129 15:57:12.639318 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70" Jan 29 15:57:12.639917 containerd[1473]: time="2025-01-29T15:57:12.639788042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:4,}" Jan 29 15:57:12.640354 containerd[1473]: time="2025-01-29T15:57:12.640088255Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" Jan 29 15:57:12.640354 containerd[1473]: 
time="2025-01-29T15:57:12.640227202Z" level=info msg="Ensure that sandbox d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70 in task-service has been cleanup successfully" Jan 29 15:57:12.641017 containerd[1473]: time="2025-01-29T15:57:12.640851104Z" level=info msg="TearDown network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" successfully" Jan 29 15:57:12.641341 containerd[1473]: time="2025-01-29T15:57:12.641259267Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" returns successfully" Jan 29 15:57:12.641550 containerd[1473]: time="2025-01-29T15:57:12.641526282Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:12.641755 containerd[1473]: time="2025-01-29T15:57:12.641638632Z" level=info msg="TearDown network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" successfully" Jan 29 15:57:12.641755 containerd[1473]: time="2025-01-29T15:57:12.641650711Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" returns successfully" Jan 29 15:57:12.642022 containerd[1473]: time="2025-01-29T15:57:12.641967002Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:12.642306 containerd[1473]: time="2025-01-29T15:57:12.642200780Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:12.642697 kubelet[2572]: I0129 15:57:12.642439 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323" Jan 29 15:57:12.642847 containerd[1473]: time="2025-01-29T15:57:12.642758169Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:12.643282 containerd[1473]: time="2025-01-29T15:57:12.643132054Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:12.643282 containerd[1473]: time="2025-01-29T15:57:12.643210047Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:12.643282 containerd[1473]: time="2025-01-29T15:57:12.643220166Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:12.643486 containerd[1473]: time="2025-01-29T15:57:12.643455344Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:12.643573 containerd[1473]: time="2025-01-29T15:57:12.643543416Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:12.643573 containerd[1473]: time="2025-01-29T15:57:12.643557255Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:12.644064 containerd[1473]: time="2025-01-29T15:57:12.644019772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:5,}" Jan 29 15:57:12.644110 containerd[1473]: time="2025-01-29T15:57:12.644083646Z" level=info 
msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" Jan 29 15:57:12.644270 containerd[1473]: time="2025-01-29T15:57:12.644242512Z" level=info msg="Ensure that sandbox 2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323 in task-service has been cleanup successfully" Jan 29 15:57:12.644388 containerd[1473]: time="2025-01-29T15:57:12.644366060Z" level=info msg="TearDown network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" successfully" Jan 29 15:57:12.644421 containerd[1473]: time="2025-01-29T15:57:12.644385659Z" level=info msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" returns successfully" Jan 29 15:57:12.644725 containerd[1473]: time="2025-01-29T15:57:12.644684071Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:12.644845 containerd[1473]: time="2025-01-29T15:57:12.644823778Z" level=info msg="TearDown network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" successfully" Jan 29 15:57:12.644845 containerd[1473]: time="2025-01-29T15:57:12.644840697Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" returns successfully" Jan 29 15:57:12.645173 containerd[1473]: time="2025-01-29T15:57:12.645142589Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:12.645246 containerd[1473]: time="2025-01-29T15:57:12.645231980Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:12.645272 containerd[1473]: time="2025-01-29T15:57:12.645246139Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:12.645506 containerd[1473]: time="2025-01-29T15:57:12.645486117Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:12.645578 containerd[1473]: time="2025-01-29T15:57:12.645564670Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:12.645608 containerd[1473]: time="2025-01-29T15:57:12.645578429Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:12.645868 containerd[1473]: time="2025-01-29T15:57:12.645844244Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:12.645947 containerd[1473]: time="2025-01-29T15:57:12.645932156Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:12.645970 containerd[1473]: time="2025-01-29T15:57:12.645945955Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:12.646406 containerd[1473]: time="2025-01-29T15:57:12.646370596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:5,}" Jan 29 15:57:12.793335 containerd[1473]: time="2025-01-29T15:57:12.793288690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:12.802483 containerd[1473]: time="2025-01-29T15:57:12.802440766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 15:57:12.812165 containerd[1473]: time="2025-01-29T15:57:12.812099636Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:12.830251 containerd[1473]: time="2025-01-29T15:57:12.830186288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:12.831710 containerd[1473]: time="2025-01-29T15:57:12.831639074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.297537511s" Jan 29 15:57:12.831710 containerd[1473]: time="2025-01-29T15:57:12.831672911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 15:57:12.843189 containerd[1473]: time="2025-01-29T15:57:12.842938832Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 15:57:12.882962 systemd[1]: run-netns-cni\x2d74f41e14\x2dc4a6\x2d51b5\x2d4386\x2d1712e7cd8623.mount: Deactivated successfully. Jan 29 15:57:12.883057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805-shm.mount: Deactivated successfully. Jan 29 15:57:12.883108 systemd[1]: run-netns-cni\x2d56160903\x2d795b\x2d604d\x2d0470\x2d158150741094.mount: Deactivated successfully. Jan 29 15:57:12.883152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da-shm.mount: Deactivated successfully. Jan 29 15:57:12.883197 systemd[1]: run-netns-cni\x2dc3b16c7e\x2d724a\x2deb9c\x2d8bca\x2d9087be0c25ad.mount: Deactivated successfully. Jan 29 15:57:12.883243 systemd[1]: run-netns-cni\x2d70bba433\x2d5ca2\x2ddd7d\x2d0777\x2d72a6bfeff027.mount: Deactivated successfully. Jan 29 15:57:12.883287 systemd[1]: run-netns-cni\x2d5ad6fcfd\x2df5ca\x2d5d25\x2dd8b7\x2dc4e1176a8562.mount: Deactivated successfully. Jan 29 15:57:12.883329 systemd[1]: run-netns-cni\x2dc1845ec5\x2d45bc\x2dde25\x2d2a5d\x2dda0384569470.mount: Deactivated successfully. Jan 29 15:57:12.883372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121892889.mount: Deactivated successfully. 
Jan 29 15:57:12.891599 containerd[1473]: time="2025-01-29T15:57:12.891536192Z" level=info msg="CreateContainer within sandbox \"68d1b1c98e11ab782e109c092538c6c4f94a8e1be49af04b59b5f133a7fdb0b1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca8f870705c5d1d1f3a00acf563d0925a97bc2a85de4a1486dfc184a208d34b0\"" Jan 29 15:57:12.893815 containerd[1473]: time="2025-01-29T15:57:12.893410739Z" level=info msg="StartContainer for \"ca8f870705c5d1d1f3a00acf563d0925a97bc2a85de4a1486dfc184a208d34b0\"" Jan 29 15:57:12.919139 containerd[1473]: time="2025-01-29T15:57:12.919083212Z" level=error msg="Failed to destroy network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.919989 containerd[1473]: time="2025-01-29T15:57:12.919954892Z" level=error msg="encountered an error cleaning up failed sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.920047 containerd[1473]: time="2025-01-29T15:57:12.920016246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.920549 kubelet[2572]: E0129 15:57:12.920221 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.920549 kubelet[2572]: E0129 15:57:12.920275 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:12.920549 kubelet[2572]: E0129 15:57:12.920294 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" Jan 29 15:57:12.920697 kubelet[2572]: E0129 15:57:12.920329 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596fc4546b-ddndj_calico-system(6b44de13-eff2-42e8-844c-3aa53fc7af03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" Jan 29 15:57:12.921462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3-shm.mount: Deactivated successfully. Jan 29 15:57:12.924218 containerd[1473]: time="2025-01-29T15:57:12.924162144Z" level=error msg="Failed to destroy network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.924496 containerd[1473]: time="2025-01-29T15:57:12.924460196Z" level=error msg="encountered an error cleaning up failed sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.924907 containerd[1473]: time="2025-01-29T15:57:12.924514591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.924948 kubelet[2572]: E0129 15:57:12.924726 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.924948 kubelet[2572]: E0129 15:57:12.924771 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:12.924948 kubelet[2572]: E0129 15:57:12.924791 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dqj97" Jan 29 15:57:12.925032 kubelet[2572]: E0129 15:57:12.924825 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dqj97_calico-system(8d0ca05b-3272-4e38-9a00-746f382615ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dqj97" podUID="8d0ca05b-3272-4e38-9a00-746f382615ae" Jan 29 15:57:12.928714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8-shm.mount: Deactivated successfully. Jan 29 15:57:12.937399 containerd[1473]: time="2025-01-29T15:57:12.937278614Z" level=error msg="Failed to destroy network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.937773 containerd[1473]: time="2025-01-29T15:57:12.937744811Z" level=error msg="encountered an error cleaning up failed sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.937895 containerd[1473]: time="2025-01-29T15:57:12.937871560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.938455 kubelet[2572]: E0129 15:57:12.938143 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.938455 kubelet[2572]: E0129 15:57:12.938196 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:12.938455 kubelet[2572]: E0129 15:57:12.938212 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x5c64" Jan 29 15:57:12.938581 kubelet[2572]: E0129 15:57:12.938247 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x5c64_kube-system(e13e233f-36bf-4ccc-9393-e6e06b49a20a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x5c64" podUID="e13e233f-36bf-4ccc-9393-e6e06b49a20a" Jan 29 15:57:12.940630 containerd[1473]: time="2025-01-29T15:57:12.940597388Z" level=error msg="Failed to destroy network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.940843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677-shm.mount: Deactivated successfully. Jan 29 15:57:12.941579 containerd[1473]: time="2025-01-29T15:57:12.941364398Z" level=error msg="encountered an error cleaning up failed sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.941702 containerd[1473]: time="2025-01-29T15:57:12.941671689Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.942059 kubelet[2572]: E0129 15:57:12.941842 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.942290 kubelet[2572]: E0129 15:57:12.942166 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:12.942290 kubelet[2572]: E0129 15:57:12.942189 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8bcmc" Jan 29 15:57:12.942290 kubelet[2572]: E0129 15:57:12.942242 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-8bcmc_kube-system(ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8bcmc" podUID="ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef" Jan 29 15:57:12.944003 containerd[1473]: time="2025-01-29T15:57:12.943963318Z" level=error msg="Failed to destroy network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.944347 containerd[1473]: time="2025-01-29T15:57:12.944248692Z" level=error msg="encountered an error cleaning up failed sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.944347 containerd[1473]: time="2025-01-29T15:57:12.944304287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.944919 kubelet[2572]: E0129 15:57:12.944469 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.944919 kubelet[2572]: E0129 15:57:12.944511 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:12.944919 kubelet[2572]: E0129 15:57:12.944530 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" Jan 29 15:57:12.945021 kubelet[2572]: E0129 15:57:12.944557 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-pstm9_calico-apiserver(eb0b4103-78c0-4eef-8691-75f802520548)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podUID="eb0b4103-78c0-4eef-8691-75f802520548" Jan 29 15:57:12.952344 containerd[1473]: time="2025-01-29T15:57:12.952240435Z" level=error msg="Failed to destroy network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.952868 containerd[1473]: time="2025-01-29T15:57:12.952709152Z" level=error msg="encountered an error cleaning up failed sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.952868 containerd[1473]: time="2025-01-29T15:57:12.952760387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.953075 kubelet[2572]: E0129 15:57:12.953049 2572 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 15:57:12.953165 kubelet[2572]: E0129 15:57:12.953147 2572 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:12.953344 kubelet[2572]: E0129 15:57:12.953210 2572 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" Jan 29 15:57:12.953344 kubelet[2572]: E0129 15:57:12.953254 2572 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd9769987-jfw4v_calico-apiserver(cc63fcad-580a-4790-b92e-85d54cee6129)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podUID="cc63fcad-580a-4790-b92e-85d54cee6129" Jan 29 15:57:12.969747 systemd[1]: Started cri-containerd-ca8f870705c5d1d1f3a00acf563d0925a97bc2a85de4a1486dfc184a208d34b0.scope - libcontainer container ca8f870705c5d1d1f3a00acf563d0925a97bc2a85de4a1486dfc184a208d34b0. Jan 29 15:57:13.001969 containerd[1473]: time="2025-01-29T15:57:13.001931534Z" level=info msg="StartContainer for \"ca8f870705c5d1d1f3a00acf563d0925a97bc2a85de4a1486dfc184a208d34b0\" returns successfully" Jan 29 15:57:13.181837 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 15:57:13.181968 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 15:57:13.573188 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:43930.service - OpenSSH per-connection server daemon (10.0.0.1:43930). Jan 29 15:57:13.621638 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 43930 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:13.622839 sshd-session[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:13.626750 systemd-logind[1458]: New session 8 of user core. Jan 29 15:57:13.637736 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 15:57:13.647918 kubelet[2572]: E0129 15:57:13.647892 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:13.651765 kubelet[2572]: I0129 15:57:13.651735 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9" Jan 29 15:57:13.653349 containerd[1473]: time="2025-01-29T15:57:13.653302876Z" level=info msg="StopPodSandbox for \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\"" Jan 29 15:57:13.653617 containerd[1473]: time="2025-01-29T15:57:13.653484180Z" level=info msg="Ensure that sandbox aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9 in task-service has been cleanup successfully" Jan 29 15:57:13.653885 containerd[1473]: time="2025-01-29T15:57:13.653698921Z" level=info msg="TearDown network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" successfully" Jan 29 15:57:13.653885 containerd[1473]: time="2025-01-29T15:57:13.653733558Z" level=info msg="StopPodSandbox for \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" returns successfully" Jan 29 15:57:13.654675 containerd[1473]: time="2025-01-29T15:57:13.654508368Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" Jan 29 15:57:13.654675 containerd[1473]: time="2025-01-29T15:57:13.654616199Z" level=info msg="TearDown network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" successfully" Jan 29 15:57:13.654675 containerd[1473]: time="2025-01-29T15:57:13.654627198Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" returns successfully" Jan 29 15:57:13.654930 containerd[1473]: time="2025-01-29T15:57:13.654906893Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:13.655043 containerd[1473]: time="2025-01-29T15:57:13.655028242Z" level=info msg="TearDown network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" successfully" Jan 29 15:57:13.655201 containerd[1473]: time="2025-01-29T15:57:13.655081557Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" returns successfully" Jan 29 15:57:13.655507 containerd[1473]: time="2025-01-29T15:57:13.655393209Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:13.655507 containerd[1473]: time="2025-01-29T15:57:13.655459244Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:13.655507 containerd[1473]: time="2025-01-29T15:57:13.655469723Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:13.655965 containerd[1473]: time="2025-01-29T15:57:13.655938601Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:13.656037 containerd[1473]: time="2025-01-29T15:57:13.656018474Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:13.656037 containerd[1473]: time="2025-01-29T15:57:13.656029313Z" level=info 
msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:13.656478 kubelet[2572]: I0129 15:57:13.656453 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677" Jan 29 15:57:13.656804 containerd[1473]: time="2025-01-29T15:57:13.656655217Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:13.656804 containerd[1473]: time="2025-01-29T15:57:13.656731650Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:13.656804 containerd[1473]: time="2025-01-29T15:57:13.656740609Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:13.657073 containerd[1473]: time="2025-01-29T15:57:13.657047742Z" level=info msg="StopPodSandbox for \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\"" Jan 29 15:57:13.657209 containerd[1473]: time="2025-01-29T15:57:13.657190449Z" level=info msg="Ensure that sandbox 96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677 in task-service has been cleanup successfully" Jan 29 15:57:13.657752 containerd[1473]: time="2025-01-29T15:57:13.657727801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:6,}" Jan 29 15:57:13.658096 containerd[1473]: time="2025-01-29T15:57:13.657790435Z" level=info msg="TearDown network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" successfully" Jan 29 15:57:13.658096 containerd[1473]: time="2025-01-29T15:57:13.657936702Z" level=info msg="StopPodSandbox for \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" returns successfully" Jan 29 15:57:13.658676 containerd[1473]: time="2025-01-29T15:57:13.658644719Z" level=info msg="StopPodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" Jan 29 15:57:13.659126 containerd[1473]: time="2025-01-29T15:57:13.658758989Z" level=info msg="TearDown network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" successfully" Jan 29 15:57:13.659126 containerd[1473]: time="2025-01-29T15:57:13.658774187Z" level=info msg="StopPodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" returns successfully" Jan 29 15:57:13.659753 containerd[1473]: time="2025-01-29T15:57:13.659448207Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:13.659753 containerd[1473]: time="2025-01-29T15:57:13.659526200Z" level=info msg="TearDown network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" successfully" Jan 29 15:57:13.659753 containerd[1473]: time="2025-01-29T15:57:13.659538199Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" returns successfully" Jan 29 15:57:13.660851 containerd[1473]: time="2025-01-29T15:57:13.660815285Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:13.660925 containerd[1473]: time="2025-01-29T15:57:13.660897678Z" level=info msg="TearDown network for sandbox 
\"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:13.660925 containerd[1473]: time="2025-01-29T15:57:13.660908117Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:13.661239 containerd[1473]: time="2025-01-29T15:57:13.661210730Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:13.661888 containerd[1473]: time="2025-01-29T15:57:13.661861032Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:13.661888 containerd[1473]: time="2025-01-29T15:57:13.661885590Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:13.663632 kubelet[2572]: I0129 15:57:13.663606 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e" Jan 29 15:57:13.664445 containerd[1473]: time="2025-01-29T15:57:13.664409044Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:13.664509 containerd[1473]: time="2025-01-29T15:57:13.664496556Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:13.664509 containerd[1473]: time="2025-01-29T15:57:13.664507115Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:13.664702 containerd[1473]: time="2025-01-29T15:57:13.664675620Z" level=info msg="StopPodSandbox for \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\"" Jan 29 15:57:13.664839 containerd[1473]: time="2025-01-29T15:57:13.664808968Z" level=info msg="Ensure that sandbox 3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e in task-service has been cleanup successfully" Jan 29 15:57:13.664994 containerd[1473]: time="2025-01-29T15:57:13.664970194Z" level=info msg="TearDown network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" successfully" Jan 29 15:57:13.664994 containerd[1473]: time="2025-01-29T15:57:13.664989232Z" level=info msg="StopPodSandbox for \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" returns successfully" Jan 29 15:57:13.665045 kubelet[2572]: E0129 15:57:13.664974 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:13.665276 containerd[1473]: time="2025-01-29T15:57:13.665237850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:6,}" Jan 29 15:57:13.666038 containerd[1473]: time="2025-01-29T15:57:13.666005662Z" level=info msg="StopPodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" Jan 29 15:57:13.666091 containerd[1473]: time="2025-01-29T15:57:13.666080575Z" level=info msg="TearDown network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" successfully" Jan 29 15:57:13.666112 containerd[1473]: time="2025-01-29T15:57:13.666092134Z" level=info msg="StopPodSandbox for 
\"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" returns successfully" Jan 29 15:57:13.666400 containerd[1473]: time="2025-01-29T15:57:13.666319754Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:13.666400 containerd[1473]: time="2025-01-29T15:57:13.666380788Z" level=info msg="TearDown network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" successfully" Jan 29 15:57:13.666400 containerd[1473]: time="2025-01-29T15:57:13.666389787Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" returns successfully" Jan 29 15:57:13.667566 containerd[1473]: time="2025-01-29T15:57:13.666984534Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:13.667566 containerd[1473]: time="2025-01-29T15:57:13.667067447Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:13.667566 containerd[1473]: time="2025-01-29T15:57:13.667080086Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:13.667566 containerd[1473]: time="2025-01-29T15:57:13.667538565Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:13.667708 containerd[1473]: time="2025-01-29T15:57:13.667634796Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:13.667708 containerd[1473]: time="2025-01-29T15:57:13.667647435Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:13.668136 containerd[1473]: time="2025-01-29T15:57:13.668024961Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:13.668136 containerd[1473]: time="2025-01-29T15:57:13.668103834Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:13.668136 containerd[1473]: time="2025-01-29T15:57:13.668114753Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:13.669407 kubelet[2572]: E0129 15:57:13.669387 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:13.669866 containerd[1473]: time="2025-01-29T15:57:13.669838319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:6,}" Jan 29 15:57:13.670196 kubelet[2572]: I0129 15:57:13.670173 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8" Jan 29 15:57:13.671619 containerd[1473]: time="2025-01-29T15:57:13.671494491Z" level=info msg="StopPodSandbox for \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\"" Jan 29 15:57:13.671740 containerd[1473]: time="2025-01-29T15:57:13.671720391Z" level=info msg="Ensure that sandbox 29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8 in task-service has been 
cleanup successfully" Jan 29 15:57:13.671947 containerd[1473]: time="2025-01-29T15:57:13.671929292Z" level=info msg="TearDown network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" successfully" Jan 29 15:57:13.672039 containerd[1473]: time="2025-01-29T15:57:13.671995287Z" level=info msg="StopPodSandbox for \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" returns successfully" Jan 29 15:57:13.672314 containerd[1473]: time="2025-01-29T15:57:13.672283141Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" Jan 29 15:57:13.672368 containerd[1473]: time="2025-01-29T15:57:13.672356174Z" level=info msg="TearDown network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" successfully" Jan 29 15:57:13.672389 containerd[1473]: time="2025-01-29T15:57:13.672367093Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" returns successfully" Jan 29 15:57:13.672934 containerd[1473]: time="2025-01-29T15:57:13.672899246Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:13.672986 containerd[1473]: time="2025-01-29T15:57:13.672969920Z" level=info msg="TearDown network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" successfully" Jan 29 15:57:13.673007 containerd[1473]: time="2025-01-29T15:57:13.672990798Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" returns successfully" Jan 29 15:57:13.674235 containerd[1473]: time="2025-01-29T15:57:13.674198050Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:13.674377 containerd[1473]: time="2025-01-29T15:57:13.674276563Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:13.674377 containerd[1473]: time="2025-01-29T15:57:13.674288202Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:13.675143 containerd[1473]: time="2025-01-29T15:57:13.675090770Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:13.675213 containerd[1473]: time="2025-01-29T15:57:13.675198720Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:13.675240 containerd[1473]: time="2025-01-29T15:57:13.675212799Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:13.675943 kubelet[2572]: I0129 15:57:13.675901 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa" Jan 29 15:57:13.678090 containerd[1473]: time="2025-01-29T15:57:13.677888960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:5,}" Jan 29 15:57:13.678090 containerd[1473]: time="2025-01-29T15:57:13.677918078Z" level=info msg="StopPodSandbox for \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\"" Jan 29 15:57:13.678600 containerd[1473]: time="2025-01-29T15:57:13.678556861Z" level=info 
msg="Ensure that sandbox c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa in task-service has been cleanup successfully" Jan 29 15:57:13.679032 containerd[1473]: time="2025-01-29T15:57:13.679007900Z" level=info msg="TearDown network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" successfully" Jan 29 15:57:13.679032 containerd[1473]: time="2025-01-29T15:57:13.679030458Z" level=info msg="StopPodSandbox for \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" returns successfully" Jan 29 15:57:13.680145 containerd[1473]: time="2025-01-29T15:57:13.679469179Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" Jan 29 15:57:13.680145 containerd[1473]: time="2025-01-29T15:57:13.679598368Z" level=info msg="TearDown network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" successfully" Jan 29 15:57:13.680145 containerd[1473]: time="2025-01-29T15:57:13.679609766Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" returns successfully" Jan 29 15:57:13.680523 containerd[1473]: time="2025-01-29T15:57:13.680326702Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:13.680568 containerd[1473]: time="2025-01-29T15:57:13.680420374Z" level=info msg="TearDown network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" successfully" Jan 29 15:57:13.680655 containerd[1473]: time="2025-01-29T15:57:13.680638955Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" returns successfully" Jan 29 15:57:13.681522 containerd[1473]: time="2025-01-29T15:57:13.681496238Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:13.681683 containerd[1473]: time="2025-01-29T15:57:13.681649464Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:13.681799 containerd[1473]: time="2025-01-29T15:57:13.681666583Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:13.682177 containerd[1473]: time="2025-01-29T15:57:13.682033910Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:13.682458 containerd[1473]: time="2025-01-29T15:57:13.682138021Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:13.682505 containerd[1473]: time="2025-01-29T15:57:13.682452553Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:13.684105 containerd[1473]: time="2025-01-29T15:57:13.684079047Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:13.686043 containerd[1473]: time="2025-01-29T15:57:13.684551245Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:13.686238 containerd[1473]: time="2025-01-29T15:57:13.686118225Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:13.686847 
containerd[1473]: time="2025-01-29T15:57:13.686630739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:6,}" Jan 29 15:57:13.705677 kubelet[2572]: I0129 15:57:13.705641 2572 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3" Jan 29 15:57:13.706366 containerd[1473]: time="2025-01-29T15:57:13.706294983Z" level=info msg="StopPodSandbox for \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\"" Jan 29 15:57:13.706600 containerd[1473]: time="2025-01-29T15:57:13.706503044Z" level=info msg="Ensure that sandbox 64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3 in task-service has been cleanup successfully" Jan 29 15:57:13.706735 containerd[1473]: time="2025-01-29T15:57:13.706711506Z" level=info msg="TearDown network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" successfully" Jan 29 15:57:13.706793 containerd[1473]: time="2025-01-29T15:57:13.706730744Z" level=info msg="StopPodSandbox for \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" returns successfully" Jan 29 15:57:13.707300 containerd[1473]: time="2025-01-29T15:57:13.707086232Z" level=info msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" Jan 29 15:57:13.707300 containerd[1473]: time="2025-01-29T15:57:13.707167425Z" level=info msg="TearDown network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" successfully" Jan 29 15:57:13.707300 containerd[1473]: time="2025-01-29T15:57:13.707176744Z" level=info msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" returns successfully" Jan 29 15:57:13.707707 containerd[1473]: time="2025-01-29T15:57:13.707463519Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:13.707707 containerd[1473]: time="2025-01-29T15:57:13.707661581Z" level=info msg="TearDown network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" successfully" Jan 29 15:57:13.707707 containerd[1473]: time="2025-01-29T15:57:13.707674220Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" returns successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732123796Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732249105Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732260184Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732650709Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732750620Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.732785337Z" level=info 
msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.733012837Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.733150864Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:13.733546 containerd[1473]: time="2025-01-29T15:57:13.733163303Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:13.762069 containerd[1473]: time="2025-01-29T15:57:13.759106986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:6,}" Jan 29 15:57:13.874001 sshd[4745]: Connection closed by 10.0.0.1 port 43930 Jan 29 15:57:13.875979 sshd-session[4743]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:13.888499 systemd[1]: run-netns-cni\x2dde042319\x2dfa85\x2d7d04\x2d9c8f\x2dc0c593c12fb0.mount: Deactivated successfully. Jan 29 15:57:13.888618 systemd[1]: run-netns-cni\x2ddd8b8296\x2d302f\x2d5b9e\x2d1e22\x2ddec2e14f7281.mount: Deactivated successfully. Jan 29 15:57:13.888674 systemd[1]: run-netns-cni\x2d7cf6564f\x2d0476\x2db864\x2d7b95\x2d837a500df8bc.mount: Deactivated successfully. Jan 29 15:57:13.888721 systemd[1]: run-netns-cni\x2de73f90d4\x2d3ee5\x2dd159\x2d0680\x2da0b0956a0f1e.mount: Deactivated successfully. Jan 29 15:57:13.888774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa-shm.mount: Deactivated successfully. Jan 29 15:57:13.888826 systemd[1]: run-netns-cni\x2db3847eb1\x2df912\x2d5c62\x2d11bc\x2d36faa7f61efc.mount: Deactivated successfully. Jan 29 15:57:13.888868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9-shm.mount: Deactivated successfully. Jan 29 15:57:13.888914 systemd[1]: run-netns-cni\x2d19a63a2f\x2dfba8\x2d98ee\x2d622f\x2d8536ac88164d.mount: Deactivated successfully. Jan 29 15:57:13.888963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e-shm.mount: Deactivated successfully. Jan 29 15:57:13.891907 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:43930.service: Deactivated successfully. Jan 29 15:57:13.895629 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 15:57:13.910483 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Jan 29 15:57:13.925326 systemd-logind[1458]: Removed session 8. 
Jan 29 15:57:14.176136 systemd-networkd[1419]: cali70f5f1d694f: Link UP Jan 29 15:57:14.176776 systemd-networkd[1419]: cali70f5f1d694f: Gained carrier Jan 29 15:57:14.185755 kubelet[2572]: I0129 15:57:14.185651 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pfntn" podStartSLOduration=2.740967641 podStartE2EDuration="15.185630841s" podCreationTimestamp="2025-01-29 15:56:59 +0000 UTC" firstStartedPulling="2025-01-29 15:57:00.392611035 +0000 UTC m=+15.037115295" lastFinishedPulling="2025-01-29 15:57:12.837274235 +0000 UTC m=+27.481778495" observedRunningTime="2025-01-29 15:57:13.663456209 +0000 UTC m=+28.307960469" watchObservedRunningTime="2025-01-29 15:57:14.185630841 +0000 UTC m=+28.830135101" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:13.749 [INFO][4772] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:13.853 [INFO][4772] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0 coredns-668d6bf9bc- kube-system ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef 772 0 2025-01-29 15:56:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8bcmc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70f5f1d694f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:13.853 [INFO][4772] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.117 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" HandleID="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Workload="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.139 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" HandleID="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Workload="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391000), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8bcmc", "timestamp":"2025-01-29 15:57:14.117493937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.139 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.139 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.140 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.141 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.147 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.151 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.153 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.155 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.155 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.156 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.160 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4844] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" host="localhost" Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
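The IPAM trace above is the normal Calico allocation path: take the host-wide IPAM lock, confirm this host's affinity to the block 192.168.88.128/26, claim one address from it (192.168.88.129 for coredns-668d6bf9bc-8bcmc), write the block back, and release the lock. As a quick sanity check of the block arithmetic, illustrative only and not Calico code: a /26 holds 64 addresses, so the block runs from 192.168.88.128 to 192.168.88.191 and the claimed address sits inside it.

    #!/usr/bin/env python3
    # Quick check of the IPAM block arithmetic from the trace above (illustrative only).
    import ipaddress

    block = ipaddress.ip_network("192.168.88.128/26")   # affine block from the log
    claimed = ipaddress.ip_address("192.168.88.129")    # address assigned to the pod

    print(block.num_addresses)         # 64 addresses in a /26
    print(block[0], "-", block[-1])    # 192.168.88.128 - 192.168.88.191
    print(claimed in block)            # True: the claimed IP falls inside the block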
Jan 29 15:57:14.190610 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" HandleID="k8s-pod-network.55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Workload="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.169 [INFO][4772] cni-plugin/k8s.go 386: Populated endpoint ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8bcmc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70f5f1d694f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.169 [INFO][4772] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.169 [INFO][4772] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f5f1d694f ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.177 [INFO][4772] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.178 
[INFO][4772] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a", Pod:"coredns-668d6bf9bc-8bcmc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70f5f1d694f", MAC:"0e:14:49:f3:e7:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.191127 containerd[1473]: 2025-01-29 15:57:14.187 [INFO][4772] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a" Namespace="kube-system" Pod="coredns-668d6bf9bc-8bcmc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8bcmc-eth0" Jan 29 15:57:14.230890 containerd[1473]: time="2025-01-29T15:57:14.230781935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.230890 containerd[1473]: time="2025-01-29T15:57:14.230854688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.230890 containerd[1473]: time="2025-01-29T15:57:14.230876606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.231713 containerd[1473]: time="2025-01-29T15:57:14.231635341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.257954 systemd[1]: Started cri-containerd-55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a.scope - libcontainer container 55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a. 
Jan 29 15:57:14.271657 systemd-networkd[1419]: calida6bc5018c2: Link UP Jan 29 15:57:14.271864 systemd-networkd[1419]: calida6bc5018c2: Gained carrier Jan 29 15:57:14.273457 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:13.815 [INFO][4794] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:13.851 [INFO][4794] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dqj97-eth0 csi-node-driver- calico-system 8d0ca05b-3272-4e38-9a00-746f382615ae 654 0 2025-01-29 15:57:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dqj97 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calida6bc5018c2 [] []}} ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:13.853 [INFO][4794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.114 [INFO][4841] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" HandleID="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Workload="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.139 [INFO][4841] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" HandleID="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Workload="localhost-k8s-csi--node--driver--dqj97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dqj97", "timestamp":"2025-01-29 15:57:14.11480133 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.139 [INFO][4841] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4841] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.166 [INFO][4841] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.244 [INFO][4841] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.248 [INFO][4841] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.253 [INFO][4841] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.254 [INFO][4841] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.256 [INFO][4841] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.256 [INFO][4841] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.258 [INFO][4841] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1 Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.261 [INFO][4841] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4841] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4841] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" host="localhost" Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4841] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 15:57:14.285537 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4841] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" HandleID="k8s-pod-network.63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Workload="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.268 [INFO][4794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dqj97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d0ca05b-3272-4e38-9a00-746f382615ae", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dqj97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida6bc5018c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.268 [INFO][4794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.268 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida6bc5018c2 ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.270 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.270 [INFO][4794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dqj97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8d0ca05b-3272-4e38-9a00-746f382615ae", ResourceVersion:"654", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1", Pod:"csi-node-driver-dqj97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calida6bc5018c2", MAC:"1a:ce:50:f9:69:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.286777 containerd[1473]: 2025-01-29 15:57:14.282 [INFO][4794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1" Namespace="calico-system" Pod="csi-node-driver-dqj97" WorkloadEndpoint="localhost-k8s-csi--node--driver--dqj97-eth0" Jan 29 15:57:14.295855 containerd[1473]: time="2025-01-29T15:57:14.295812708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8bcmc,Uid:ba02e7a6-9c5e-4aac-977f-d5a6845ef7ef,Namespace:kube-system,Attempt:6,} returns sandbox id \"55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a\"" Jan 29 15:57:14.296641 kubelet[2572]: E0129 15:57:14.296617 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:14.299393 containerd[1473]: time="2025-01-29T15:57:14.299363800Z" level=info msg="CreateContainer within sandbox \"55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 15:57:14.307829 containerd[1473]: time="2025-01-29T15:57:14.307668842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.308014 containerd[1473]: time="2025-01-29T15:57:14.307779232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.308014 containerd[1473]: time="2025-01-29T15:57:14.307795751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.308610 containerd[1473]: time="2025-01-29T15:57:14.308415897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.322933 containerd[1473]: time="2025-01-29T15:57:14.322884005Z" level=info msg="CreateContainer within sandbox \"55999f4cce1b9fa519de43dedf4c94b17e3448e4f99c436f782a604823cd435a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f23c225965e5248aad2ba8fbaf38d0a26a325baf2aa2949b2885d9f44160eff0\"" Jan 29 15:57:14.323902 containerd[1473]: time="2025-01-29T15:57:14.323873040Z" level=info msg="StartContainer for \"f23c225965e5248aad2ba8fbaf38d0a26a325baf2aa2949b2885d9f44160eff0\"" Jan 29 15:57:14.329816 systemd[1]: Started cri-containerd-63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1.scope - libcontainer container 63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1. Jan 29 15:57:14.342512 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.348100 systemd[1]: Started cri-containerd-f23c225965e5248aad2ba8fbaf38d0a26a325baf2aa2949b2885d9f44160eff0.scope - libcontainer container f23c225965e5248aad2ba8fbaf38d0a26a325baf2aa2949b2885d9f44160eff0. Jan 29 15:57:14.366172 containerd[1473]: time="2025-01-29T15:57:14.366133743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dqj97,Uid:8d0ca05b-3272-4e38-9a00-746f382615ae,Namespace:calico-system,Attempt:5,} returns sandbox id \"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1\"" Jan 29 15:57:14.369281 containerd[1473]: time="2025-01-29T15:57:14.368994376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 15:57:14.381434 containerd[1473]: time="2025-01-29T15:57:14.381391943Z" level=info msg="StartContainer for \"f23c225965e5248aad2ba8fbaf38d0a26a325baf2aa2949b2885d9f44160eff0\" returns successfully" Jan 29 15:57:14.383550 systemd-networkd[1419]: calicabd2aa4537: Link UP Jan 29 15:57:14.385105 systemd-networkd[1419]: calicabd2aa4537: Gained carrier Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:13.879 [INFO][4824] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:13.924 [INFO][4824] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0 calico-kube-controllers-596fc4546b- calico-system 6b44de13-eff2-42e8-844c-3aa53fc7af03 777 0 2025-01-29 15:57:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596fc4546b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-596fc4546b-ddndj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicabd2aa4537 [] []}} ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:13.924 [INFO][4824] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 
15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.113 [INFO][4870] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.140 [INFO][4870] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-596fc4546b-ddndj", "timestamp":"2025-01-29 15:57:14.113822774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.140 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.266 [INFO][4870] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.345 [INFO][4870] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.350 [INFO][4870] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.354 [INFO][4870] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.356 [INFO][4870] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.360 [INFO][4870] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.360 [INFO][4870] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.361 [INFO][4870] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0 Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.367 [INFO][4870] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.376 [INFO][4870] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" 
host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.376 [INFO][4870] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" host="localhost" Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.376 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 15:57:14.394837 containerd[1473]: 2025-01-29 15:57:14.376 [INFO][4870] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.379 [INFO][4824] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0", GenerateName:"calico-kube-controllers-596fc4546b-", Namespace:"calico-system", SelfLink:"", UID:"6b44de13-eff2-42e8-844c-3aa53fc7af03", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596fc4546b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-596fc4546b-ddndj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicabd2aa4537", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.379 [INFO][4824] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.379 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicabd2aa4537 ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.383 [INFO][4824] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.383 [INFO][4824] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0", GenerateName:"calico-kube-controllers-596fc4546b-", Namespace:"calico-system", SelfLink:"", UID:"6b44de13-eff2-42e8-844c-3aa53fc7af03", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596fc4546b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0", Pod:"calico-kube-controllers-596fc4546b-ddndj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicabd2aa4537", MAC:"de:c4:30:84:43:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.395574 containerd[1473]: 2025-01-29 15:57:14.392 [INFO][4824] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Namespace="calico-system" Pod="calico-kube-controllers-596fc4546b-ddndj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:14.431745 containerd[1473]: time="2025-01-29T15:57:14.418079649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.431745 containerd[1473]: time="2025-01-29T15:57:14.431493888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.431745 containerd[1473]: time="2025-01-29T15:57:14.431508607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.432042 containerd[1473]: time="2025-01-29T15:57:14.431624797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.461778 systemd[1]: Started cri-containerd-1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0.scope - libcontainer container 1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0. Jan 29 15:57:14.476402 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.478072 systemd-networkd[1419]: calid7b347a7fc7: Link UP Jan 29 15:57:14.478823 systemd-networkd[1419]: calid7b347a7fc7: Gained carrier Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:13.719 [INFO][4763] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:13.852 [INFO][4763] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--x5c64-eth0 coredns-668d6bf9bc- kube-system e13e233f-36bf-4ccc-9393-e6e06b49a20a 774 0 2025-01-29 15:56:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-x5c64 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7b347a7fc7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:13.852 [INFO][4763] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.113 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" HandleID="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Workload="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.140 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" HandleID="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Workload="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000496c90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-x5c64", "timestamp":"2025-01-29 15:57:14.113824534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.141 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.377 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.377 [INFO][4842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.444 [INFO][4842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.450 [INFO][4842] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.456 [INFO][4842] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.458 [INFO][4842] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.460 [INFO][4842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.460 [INFO][4842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.461 [INFO][4842] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188 Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.466 [INFO][4842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" host="localhost" Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 15:57:14.492991 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" HandleID="k8s-pod-network.c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Workload="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.475 [INFO][4763] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x5c64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e13e233f-36bf-4ccc-9393-e6e06b49a20a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-x5c64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b347a7fc7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.476 [INFO][4763] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.476 [INFO][4763] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7b347a7fc7 ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.479 [INFO][4763] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.479 
[INFO][4763] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--x5c64-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e13e233f-36bf-4ccc-9393-e6e06b49a20a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 56, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188", Pod:"coredns-668d6bf9bc-x5c64", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7b347a7fc7", MAC:"02:c0:76:a9:8f:d8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.494973 containerd[1473]: 2025-01-29 15:57:14.490 [INFO][4763] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188" Namespace="kube-system" Pod="coredns-668d6bf9bc-x5c64" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--x5c64-eth0" Jan 29 15:57:14.520281 containerd[1473]: time="2025-01-29T15:57:14.520111260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596fc4546b-ddndj,Uid:6b44de13-eff2-42e8-844c-3aa53fc7af03,Namespace:calico-system,Attempt:6,} returns sandbox id \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\"" Jan 29 15:57:14.545201 containerd[1473]: time="2025-01-29T15:57:14.543260257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.545201 containerd[1473]: time="2025-01-29T15:57:14.544558305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.545201 containerd[1473]: time="2025-01-29T15:57:14.544572504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.545201 containerd[1473]: time="2025-01-29T15:57:14.544670575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.567758 systemd[1]: Started cri-containerd-c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188.scope - libcontainer container c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188. Jan 29 15:57:14.584927 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.609147 systemd-networkd[1419]: cali29c8b79ca7c: Link UP Jan 29 15:57:14.611777 systemd-networkd[1419]: cali29c8b79ca7c: Gained carrier Jan 29 15:57:14.618739 containerd[1473]: time="2025-01-29T15:57:14.618685531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x5c64,Uid:e13e233f-36bf-4ccc-9393-e6e06b49a20a,Namespace:kube-system,Attempt:6,} returns sandbox id \"c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188\"" Jan 29 15:57:14.621036 kubelet[2572]: E0129 15:57:14.621008 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:14.624857 containerd[1473]: time="2025-01-29T15:57:14.624822640Z" level=info msg="CreateContainer within sandbox \"c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:13.715 [INFO][4746] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:13.852 [INFO][4746] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0 calico-apiserver-7fd9769987- calico-apiserver eb0b4103-78c0-4eef-8691-75f802520548 775 0 2025-01-29 15:57:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd9769987 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fd9769987-pstm9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29c8b79ca7c [] []}} ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:13.852 [INFO][4746] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.121 [INFO][4854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" HandleID="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Workload="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.144 
[INFO][4854] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" HandleID="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Workload="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c13c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fd9769987-pstm9", "timestamp":"2025-01-29 15:57:14.121824282 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.144 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.472 [INFO][4854] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.544 [INFO][4854] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.555 [INFO][4854] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.562 [INFO][4854] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.567 [INFO][4854] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.571 [INFO][4854] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.571 [INFO][4854] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.574 [INFO][4854] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589 Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.578 [INFO][4854] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.594 [INFO][4854] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.594 [INFO][4854] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" host="localhost" Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.594 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 15:57:14.631230 containerd[1473]: 2025-01-29 15:57:14.594 [INFO][4854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" HandleID="k8s-pod-network.45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Workload="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.602 [INFO][4746] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0", GenerateName:"calico-apiserver-7fd9769987-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb0b4103-78c0-4eef-8691-75f802520548", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd9769987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fd9769987-pstm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29c8b79ca7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.602 [INFO][4746] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.602 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29c8b79ca7c ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.612 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.614 [INFO][4746] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0", GenerateName:"calico-apiserver-7fd9769987-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb0b4103-78c0-4eef-8691-75f802520548", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd9769987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589", Pod:"calico-apiserver-7fd9769987-pstm9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29c8b79ca7c", MAC:"6a:40:76:40:c1:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.632410 containerd[1473]: 2025-01-29 15:57:14.628 [INFO][4746] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-pstm9" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--pstm9-eth0" Jan 29 15:57:14.671249 containerd[1473]: time="2025-01-29T15:57:14.671210106Z" level=info msg="CreateContainer within sandbox \"c997efe6fd1d99ac77595edf7250a20bf651c0685b81db78fd433d7c7b55f188\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4c8fac88acfd00d2b94bf208e2b09b938eb09f302e048f7e25abd77756fe3ea6\"" Jan 29 15:57:14.678378 containerd[1473]: time="2025-01-29T15:57:14.678095431Z" level=info msg="StartContainer for \"4c8fac88acfd00d2b94bf208e2b09b938eb09f302e048f7e25abd77756fe3ea6\"" Jan 29 15:57:14.688983 containerd[1473]: time="2025-01-29T15:57:14.687467940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.688983 containerd[1473]: time="2025-01-29T15:57:14.687514896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.688983 containerd[1473]: time="2025-01-29T15:57:14.687525055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.688983 containerd[1473]: time="2025-01-29T15:57:14.687611367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.717939 systemd[1]: Started cri-containerd-45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589.scope - libcontainer container 45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589. Jan 29 15:57:14.729885 systemd-networkd[1419]: cali626e8b5ae63: Link UP Jan 29 15:57:14.731148 systemd-networkd[1419]: cali626e8b5ae63: Gained carrier Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:13.789 [INFO][4806] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:13.853 [INFO][4806] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0 calico-apiserver-7fd9769987- calico-apiserver cc63fcad-580a-4790-b92e-85d54cee6129 776 0 2025-01-29 15:57:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd9769987 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7fd9769987-jfw4v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali626e8b5ae63 [] []}} ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:13.853 [INFO][4806] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.114 [INFO][4843] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" HandleID="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Workload="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.144 [INFO][4843] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" HandleID="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Workload="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000426320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7fd9769987-jfw4v", "timestamp":"2025-01-29 15:57:14.114235659 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.144 [INFO][4843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.595 [INFO][4843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.600 [INFO][4843] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.650 [INFO][4843] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.658 [INFO][4843] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.682 [INFO][4843] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.685 [INFO][4843] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.692 [INFO][4843] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.692 [INFO][4843] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.694 [INFO][4843] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.707 [INFO][4843] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.721 [INFO][4843] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.721 [INFO][4843] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" host="localhost" Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.721 [INFO][4843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
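Both IPAM traces above (the .133 and .134 assignments) follow the same block-based pattern: take the host-wide IPAM lock, confirm this host's affinity for 192.168.88.128/26, load that block, claim the first free address in it, and write the block back before releasing the lock. The Go sketch below is only an editorial illustration of the address-selection step visible in those traces; the Block type, the NextFree helper, and the set of already-claimed addresses are hypothetical stand-ins, not Calico's actual ipam package.

package main

import (
	"fmt"
	"net"
)

// Block is a hypothetical stand-in for one affinity block such as
// 192.168.88.128/26, tracking which addresses have already been handed out.
type Block struct {
	CIDR    *net.IPNet
	Claimed map[string]bool
}

// NextFree returns the first unclaimed address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the traces above.
func (b *Block) NextFree() (net.IP, bool) {
	for ip := b.CIDR.IP.Mask(b.CIDR.Mask); b.CIDR.Contains(ip); ip = next(ip) {
		if !b.Claimed[ip.String()] {
			b.Claimed[ip.String()] = true // analogous to "Writing block in order to claim IPs"
			return ip, true
		}
	}
	return nil, false
}

// next returns ip + 1 without modifying its argument.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &Block{CIDR: cidr, Claimed: map[string]bool{}}
	// Assume .128 through .132 were claimed by the pods brought up earlier in this log.
	for i := 128; i <= 132; i++ {
		block.Claimed[fmt.Sprintf("192.168.88.%d", i)] = true
	}
	ip, _ := block.NextFree()
	fmt.Println(ip) // prints 192.168.88.133, the next address handed out above
}

Run as-is, the sketch prints 192.168.88.133, the first address the log shows being claimed from this block; the real plugin additionally records a per-container handle and holds the host-wide lock around the whole sequence, as the entries above show.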
Jan 29 15:57:14.746856 containerd[1473]: 2025-01-29 15:57:14.721 [INFO][4843] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" HandleID="k8s-pod-network.f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Workload="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.724 [INFO][4806] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0", GenerateName:"calico-apiserver-7fd9769987-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc63fcad-580a-4790-b92e-85d54cee6129", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd9769987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7fd9769987-jfw4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali626e8b5ae63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.724 [INFO][4806] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.725 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali626e8b5ae63 ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.730 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.730 [INFO][4806] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0", GenerateName:"calico-apiserver-7fd9769987-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc63fcad-580a-4790-b92e-85d54cee6129", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd9769987", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf", Pod:"calico-apiserver-7fd9769987-jfw4v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali626e8b5ae63", MAC:"ba:9f:20:ea:7d:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:14.747373 containerd[1473]: 2025-01-29 15:57:14.743 [INFO][4806] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf" Namespace="calico-apiserver" Pod="calico-apiserver-7fd9769987-jfw4v" WorkloadEndpoint="localhost-k8s-calico--apiserver--7fd9769987--jfw4v-eth0" Jan 29 15:57:14.754494 kubelet[2572]: E0129 15:57:14.754301 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:14.755764 systemd[1]: Started cri-containerd-4c8fac88acfd00d2b94bf208e2b09b938eb09f302e048f7e25abd77756fe3ea6.scope - libcontainer container 4c8fac88acfd00d2b94bf208e2b09b938eb09f302e048f7e25abd77756fe3ea6. Jan 29 15:57:14.773065 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.777171 kubelet[2572]: I0129 15:57:14.776356 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:57:14.777171 kubelet[2572]: E0129 15:57:14.776751 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:14.788867 containerd[1473]: time="2025-01-29T15:57:14.788519436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:14.788867 containerd[1473]: time="2025-01-29T15:57:14.788577111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:14.788867 containerd[1473]: time="2025-01-29T15:57:14.788601829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.790351 containerd[1473]: time="2025-01-29T15:57:14.790161934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:14.798697 kubelet[2572]: I0129 15:57:14.797772 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8bcmc" podStartSLOduration=22.797751197 podStartE2EDuration="22.797751197s" podCreationTimestamp="2025-01-29 15:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:57:14.77357141 +0000 UTC m=+29.418075630" watchObservedRunningTime="2025-01-29 15:57:14.797751197 +0000 UTC m=+29.442255457" Jan 29 15:57:14.818195 containerd[1473]: time="2025-01-29T15:57:14.817819141Z" level=info msg="StartContainer for \"4c8fac88acfd00d2b94bf208e2b09b938eb09f302e048f7e25abd77756fe3ea6\" returns successfully" Jan 29 15:57:14.825781 systemd[1]: Started cri-containerd-f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf.scope - libcontainer container f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf. Jan 29 15:57:14.836524 containerd[1473]: time="2025-01-29T15:57:14.836428571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-pstm9,Uid:eb0b4103-78c0-4eef-8691-75f802520548,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589\"" Jan 29 15:57:14.867604 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:14.885616 kernel: bpftool[5408]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 15:57:14.914571 containerd[1473]: time="2025-01-29T15:57:14.914532173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd9769987-jfw4v,Uid:cc63fcad-580a-4790-b92e-85d54cee6129,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf\"" Jan 29 15:57:15.054108 systemd-networkd[1419]: vxlan.calico: Link UP Jan 29 15:57:15.054303 systemd-networkd[1419]: vxlan.calico: Gained carrier Jan 29 15:57:15.467797 containerd[1473]: time="2025-01-29T15:57:15.467675092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:15.468271 containerd[1473]: time="2025-01-29T15:57:15.468163971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 15:57:15.469377 containerd[1473]: time="2025-01-29T15:57:15.469329754Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:15.472793 containerd[1473]: time="2025-01-29T15:57:15.472700871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:15.473614 containerd[1473]: 
time="2025-01-29T15:57:15.473549600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.104521468s" Jan 29 15:57:15.473614 containerd[1473]: time="2025-01-29T15:57:15.473579037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 15:57:15.474647 containerd[1473]: time="2025-01-29T15:57:15.474452564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 15:57:15.477246 containerd[1473]: time="2025-01-29T15:57:15.477110222Z" level=info msg="CreateContainer within sandbox \"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 15:57:15.492266 containerd[1473]: time="2025-01-29T15:57:15.492146441Z" level=info msg="CreateContainer within sandbox \"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5debd85473389609cc9c45396d4f3dc416ab0a75556660d68340ce04f006e145\"" Jan 29 15:57:15.492787 containerd[1473]: time="2025-01-29T15:57:15.492614242Z" level=info msg="StartContainer for \"5debd85473389609cc9c45396d4f3dc416ab0a75556660d68340ce04f006e145\"" Jan 29 15:57:15.522740 systemd[1]: Started cri-containerd-5debd85473389609cc9c45396d4f3dc416ab0a75556660d68340ce04f006e145.scope - libcontainer container 5debd85473389609cc9c45396d4f3dc416ab0a75556660d68340ce04f006e145. 
Jan 29 15:57:15.552649 containerd[1473]: time="2025-01-29T15:57:15.550938793Z" level=info msg="StartContainer for \"5debd85473389609cc9c45396d4f3dc416ab0a75556660d68340ce04f006e145\" returns successfully" Jan 29 15:57:15.782945 kubelet[2572]: E0129 15:57:15.782615 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:15.803332 kubelet[2572]: I0129 15:57:15.803275 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x5c64" podStartSLOduration=23.803259243 podStartE2EDuration="23.803259243s" podCreationTimestamp="2025-01-29 15:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:57:15.801455195 +0000 UTC m=+30.445959455" watchObservedRunningTime="2025-01-29 15:57:15.803259243 +0000 UTC m=+30.447763503" Jan 29 15:57:15.813412 kubelet[2572]: E0129 15:57:15.813383 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:15.814870 kubelet[2572]: E0129 15:57:15.813939 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:15.837735 systemd-networkd[1419]: calid7b347a7fc7: Gained IPv6LL Jan 29 15:57:15.979651 systemd-networkd[1419]: calicabd2aa4537: Gained IPv6LL Jan 29 15:57:16.093748 systemd-networkd[1419]: cali70f5f1d694f: Gained IPv6LL Jan 29 15:57:16.221759 systemd-networkd[1419]: calida6bc5018c2: Gained IPv6LL Jan 29 15:57:16.222191 systemd-networkd[1419]: cali29c8b79ca7c: Gained IPv6LL Jan 29 15:57:16.478652 systemd-networkd[1419]: vxlan.calico: Gained IPv6LL Jan 29 15:57:16.797759 systemd-networkd[1419]: cali626e8b5ae63: Gained IPv6LL Jan 29 15:57:16.815422 kubelet[2572]: E0129 15:57:16.815389 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:16.816139 kubelet[2572]: E0129 15:57:16.815534 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:16.909004 containerd[1473]: time="2025-01-29T15:57:16.908951942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:16.909942 containerd[1473]: time="2025-01-29T15:57:16.909899346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 15:57:16.910658 containerd[1473]: time="2025-01-29T15:57:16.910628806Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:16.913752 containerd[1473]: time="2025-01-29T15:57:16.913716556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:16.914234 containerd[1473]: 
time="2025-01-29T15:57:16.914190637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.439704276s" Jan 29 15:57:16.914234 containerd[1473]: time="2025-01-29T15:57:16.914224314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 15:57:16.916104 containerd[1473]: time="2025-01-29T15:57:16.916042247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 15:57:16.923533 containerd[1473]: time="2025-01-29T15:57:16.923471323Z" level=info msg="CreateContainer within sandbox \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 15:57:16.933052 containerd[1473]: time="2025-01-29T15:57:16.933004189Z" level=info msg="CreateContainer within sandbox \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\"" Jan 29 15:57:16.933688 containerd[1473]: time="2025-01-29T15:57:16.933635018Z" level=info msg="StartContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\"" Jan 29 15:57:16.969754 systemd[1]: Started cri-containerd-92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812.scope - libcontainer container 92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812. 
Jan 29 15:57:17.003509 containerd[1473]: time="2025-01-29T15:57:17.003454276Z" level=info msg="StartContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" returns successfully" Jan 29 15:57:17.821628 kubelet[2572]: E0129 15:57:17.820985 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:17.832120 kubelet[2572]: I0129 15:57:17.832037 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-596fc4546b-ddndj" podStartSLOduration=15.438569623 podStartE2EDuration="17.832021058s" podCreationTimestamp="2025-01-29 15:57:00 +0000 UTC" firstStartedPulling="2025-01-29 15:57:14.521485261 +0000 UTC m=+29.165989521" lastFinishedPulling="2025-01-29 15:57:16.914936736 +0000 UTC m=+31.559440956" observedRunningTime="2025-01-29 15:57:17.830865349 +0000 UTC m=+32.475369609" watchObservedRunningTime="2025-01-29 15:57:17.832021058 +0000 UTC m=+32.476525318" Jan 29 15:57:18.595010 containerd[1473]: time="2025-01-29T15:57:18.594966543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:18.595930 containerd[1473]: time="2025-01-29T15:57:18.595715246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 15:57:18.596619 containerd[1473]: time="2025-01-29T15:57:18.596570301Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:18.599839 containerd[1473]: time="2025-01-29T15:57:18.599796895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:18.600495 containerd[1473]: time="2025-01-29T15:57:18.600296057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.684221452s" Jan 29 15:57:18.600495 containerd[1473]: time="2025-01-29T15:57:18.600322615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 15:57:18.602031 containerd[1473]: time="2025-01-29T15:57:18.601960330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 15:57:18.602763 containerd[1473]: time="2025-01-29T15:57:18.602679635Z" level=info msg="CreateContainer within sandbox \"45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 15:57:18.614711 containerd[1473]: time="2025-01-29T15:57:18.614672761Z" level=info msg="CreateContainer within sandbox \"45d3f49503cebcaf9a8b18ab78747629ff41c39a8a9550c40faa48e4b6acb589\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"daff9c8ffd367cd056cdf645ad4e10cb78af5dbed1d5945cd3e9e7f55084f621\"" Jan 29 15:57:18.615296 containerd[1473]: 
time="2025-01-29T15:57:18.615107488Z" level=info msg="StartContainer for \"daff9c8ffd367cd056cdf645ad4e10cb78af5dbed1d5945cd3e9e7f55084f621\"" Jan 29 15:57:18.645742 systemd[1]: Started cri-containerd-daff9c8ffd367cd056cdf645ad4e10cb78af5dbed1d5945cd3e9e7f55084f621.scope - libcontainer container daff9c8ffd367cd056cdf645ad4e10cb78af5dbed1d5945cd3e9e7f55084f621. Jan 29 15:57:18.677220 containerd[1473]: time="2025-01-29T15:57:18.677171158Z" level=info msg="StartContainer for \"daff9c8ffd367cd056cdf645ad4e10cb78af5dbed1d5945cd3e9e7f55084f621\" returns successfully" Jan 29 15:57:18.826322 kubelet[2572]: I0129 15:57:18.826287 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:57:18.836523 kubelet[2572]: I0129 15:57:18.836420 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fd9769987-pstm9" podStartSLOduration=15.073491767 podStartE2EDuration="18.836402504s" podCreationTimestamp="2025-01-29 15:57:00 +0000 UTC" firstStartedPulling="2025-01-29 15:57:14.838205737 +0000 UTC m=+29.482709957" lastFinishedPulling="2025-01-29 15:57:18.601116434 +0000 UTC m=+33.245620694" observedRunningTime="2025-01-29 15:57:18.836228797 +0000 UTC m=+33.480733057" watchObservedRunningTime="2025-01-29 15:57:18.836402504 +0000 UTC m=+33.480906764" Jan 29 15:57:18.858553 containerd[1473]: time="2025-01-29T15:57:18.858448104Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:18.859538 containerd[1473]: time="2025-01-29T15:57:18.859494224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 15:57:18.861504 containerd[1473]: time="2025-01-29T15:57:18.861472674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 259.469427ms" Jan 29 15:57:18.861555 containerd[1473]: time="2025-01-29T15:57:18.861505991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 15:57:18.863257 containerd[1473]: time="2025-01-29T15:57:18.863228020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 15:57:18.863677 containerd[1473]: time="2025-01-29T15:57:18.863645588Z" level=info msg="CreateContainer within sandbox \"f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 15:57:18.875916 containerd[1473]: time="2025-01-29T15:57:18.875871576Z" level=info msg="CreateContainer within sandbox \"f746a7bffb2bcb0767329484176ed10f9d6fb4d21fb18a9f38dc24b15edfb4bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c23e47b32e80e67b177450a805caad67b3685955039f85d085167281bd1ee05a\"" Jan 29 15:57:18.876658 containerd[1473]: time="2025-01-29T15:57:18.876630679Z" level=info msg="StartContainer for \"c23e47b32e80e67b177450a805caad67b3685955039f85d085167281bd1ee05a\"" Jan 29 15:57:18.889619 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:43944.service - OpenSSH per-connection server daemon (10.0.0.1:43944). 
Jan 29 15:57:18.904712 systemd[1]: Started cri-containerd-c23e47b32e80e67b177450a805caad67b3685955039f85d085167281bd1ee05a.scope - libcontainer container c23e47b32e80e67b177450a805caad67b3685955039f85d085167281bd1ee05a. Jan 29 15:57:18.950278 containerd[1473]: time="2025-01-29T15:57:18.948785980Z" level=info msg="StartContainer for \"c23e47b32e80e67b177450a805caad67b3685955039f85d085167281bd1ee05a\" returns successfully" Jan 29 15:57:18.966522 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:18.968119 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:18.975174 systemd-logind[1458]: New session 9 of user core. Jan 29 15:57:18.980760 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 15:57:19.289993 sshd[5742]: Connection closed by 10.0.0.1 port 43944 Jan 29 15:57:19.290644 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:19.294574 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:43944.service: Deactivated successfully. Jan 29 15:57:19.296344 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 15:57:19.297912 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Jan 29 15:57:19.299451 systemd-logind[1458]: Removed session 9. Jan 29 15:57:20.268921 kubelet[2572]: I0129 15:57:20.268859 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7fd9769987-jfw4v" podStartSLOduration=16.322556979 podStartE2EDuration="20.268843424s" podCreationTimestamp="2025-01-29 15:57:00 +0000 UTC" firstStartedPulling="2025-01-29 15:57:14.915930892 +0000 UTC m=+29.560435152" lastFinishedPulling="2025-01-29 15:57:18.862217337 +0000 UTC m=+33.506721597" observedRunningTime="2025-01-29 15:57:19.857549928 +0000 UTC m=+34.502054188" watchObservedRunningTime="2025-01-29 15:57:20.268843424 +0000 UTC m=+34.913347684" Jan 29 15:57:20.281538 containerd[1473]: time="2025-01-29T15:57:20.281488840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:20.286571 containerd[1473]: time="2025-01-29T15:57:20.285780693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 15:57:20.287686 containerd[1473]: time="2025-01-29T15:57:20.287645079Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:20.291554 containerd[1473]: time="2025-01-29T15:57:20.291499324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 15:57:20.292522 containerd[1473]: time="2025-01-29T15:57:20.292479974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.429214876s" Jan 29 15:57:20.292522 containerd[1473]: 
time="2025-01-29T15:57:20.292518971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 15:57:20.307036 containerd[1473]: time="2025-01-29T15:57:20.306859905Z" level=info msg="CreateContainer within sandbox \"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 15:57:20.371470 containerd[1473]: time="2025-01-29T15:57:20.371296377Z" level=info msg="CreateContainer within sandbox \"63052f1cc2b0867c56912fced6e7afc38c3ba72011dbf32f992f5b0afb3903a1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cd23c9166304b4b3a5ce8e7a7e512f0ffa9a6b7e671fbe360ae33c81590d5f7d\"" Jan 29 15:57:20.373794 containerd[1473]: time="2025-01-29T15:57:20.372524009Z" level=info msg="StartContainer for \"cd23c9166304b4b3a5ce8e7a7e512f0ffa9a6b7e671fbe360ae33c81590d5f7d\"" Jan 29 15:57:20.404786 systemd[1]: Started cri-containerd-cd23c9166304b4b3a5ce8e7a7e512f0ffa9a6b7e671fbe360ae33c81590d5f7d.scope - libcontainer container cd23c9166304b4b3a5ce8e7a7e512f0ffa9a6b7e671fbe360ae33c81590d5f7d. Jan 29 15:57:20.437182 containerd[1473]: time="2025-01-29T15:57:20.437132308Z" level=info msg="StartContainer for \"cd23c9166304b4b3a5ce8e7a7e512f0ffa9a6b7e671fbe360ae33c81590d5f7d\" returns successfully" Jan 29 15:57:20.499534 kubelet[2572]: I0129 15:57:20.499489 2572 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 15:57:20.507532 kubelet[2572]: I0129 15:57:20.507490 2572 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 15:57:20.850056 kubelet[2572]: I0129 15:57:20.849880 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:57:21.571446 kubelet[2572]: I0129 15:57:21.570962 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:57:21.628860 kubelet[2572]: I0129 15:57:21.628790 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dqj97" podStartSLOduration=15.704308804 podStartE2EDuration="21.628770288s" podCreationTimestamp="2025-01-29 15:57:00 +0000 UTC" firstStartedPulling="2025-01-29 15:57:14.368753077 +0000 UTC m=+29.013257337" lastFinishedPulling="2025-01-29 15:57:20.293214561 +0000 UTC m=+34.937718821" observedRunningTime="2025-01-29 15:57:20.862087077 +0000 UTC m=+35.506591337" watchObservedRunningTime="2025-01-29 15:57:21.628770288 +0000 UTC m=+36.273274548" Jan 29 15:57:24.302168 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:50226.service - OpenSSH per-connection server daemon (10.0.0.1:50226). Jan 29 15:57:24.361421 sshd[5859]: Accepted publickey for core from 10.0.0.1 port 50226 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:24.363032 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:24.368654 systemd-logind[1458]: New session 10 of user core. Jan 29 15:57:24.377733 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 29 15:57:24.619048 sshd[5861]: Connection closed by 10.0.0.1 port 50226 Jan 29 15:57:24.619709 sshd-session[5859]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:24.627799 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:50226.service: Deactivated successfully. Jan 29 15:57:24.629457 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 15:57:24.630184 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Jan 29 15:57:24.640988 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:50240.service - OpenSSH per-connection server daemon (10.0.0.1:50240). Jan 29 15:57:24.642074 systemd-logind[1458]: Removed session 10. Jan 29 15:57:24.687301 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 50240 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:24.689505 sshd-session[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:24.702330 systemd-logind[1458]: New session 11 of user core. Jan 29 15:57:24.713763 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 15:57:24.925231 sshd[5880]: Connection closed by 10.0.0.1 port 50240 Jan 29 15:57:24.926103 sshd-session[5874]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:24.945041 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:50242.service - OpenSSH per-connection server daemon (10.0.0.1:50242). Jan 29 15:57:24.946733 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:50240.service: Deactivated successfully. Jan 29 15:57:24.952143 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 15:57:24.955090 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. Jan 29 15:57:24.957862 systemd-logind[1458]: Removed session 11. Jan 29 15:57:24.989623 sshd[5889]: Accepted publickey for core from 10.0.0.1 port 50242 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:24.990920 sshd-session[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:24.995183 systemd-logind[1458]: New session 12 of user core. Jan 29 15:57:25.003751 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 15:57:25.142887 sshd[5894]: Connection closed by 10.0.0.1 port 50242 Jan 29 15:57:25.143376 sshd-session[5889]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:25.148296 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:50242.service: Deactivated successfully. Jan 29 15:57:25.151313 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 15:57:25.153419 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Jan 29 15:57:25.154546 systemd-logind[1458]: Removed session 12. Jan 29 15:57:30.153674 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:50248.service - OpenSSH per-connection server daemon (10.0.0.1:50248). Jan 29 15:57:30.201650 sshd[5918]: Accepted publickey for core from 10.0.0.1 port 50248 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:30.202889 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:30.206872 systemd-logind[1458]: New session 13 of user core. Jan 29 15:57:30.217730 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 15:57:30.365695 sshd[5920]: Connection closed by 10.0.0.1 port 50248 Jan 29 15:57:30.366043 sshd-session[5918]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:30.370070 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:50248.service: Deactivated successfully. 
Jan 29 15:57:30.373298 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 15:57:30.374088 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Jan 29 15:57:30.375262 systemd-logind[1458]: Removed session 13. Jan 29 15:57:35.377988 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:38400.service - OpenSSH per-connection server daemon (10.0.0.1:38400). Jan 29 15:57:35.420579 sshd[5941]: Accepted publickey for core from 10.0.0.1 port 38400 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:35.421855 sshd-session[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:35.425922 systemd-logind[1458]: New session 14 of user core. Jan 29 15:57:35.432791 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 15:57:35.608637 sshd[5943]: Connection closed by 10.0.0.1 port 38400 Jan 29 15:57:35.609175 sshd-session[5941]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:35.625399 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:38400.service: Deactivated successfully. Jan 29 15:57:35.628748 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 15:57:35.631024 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Jan 29 15:57:35.638673 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:38402.service - OpenSSH per-connection server daemon (10.0.0.1:38402). Jan 29 15:57:35.639508 systemd-logind[1458]: Removed session 14. Jan 29 15:57:35.687832 sshd[5956]: Accepted publickey for core from 10.0.0.1 port 38402 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:35.689124 sshd-session[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:35.693360 systemd-logind[1458]: New session 15 of user core. Jan 29 15:57:35.703766 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 15:57:35.953930 sshd[5959]: Connection closed by 10.0.0.1 port 38402 Jan 29 15:57:35.953398 sshd-session[5956]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:35.969810 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:38402.service: Deactivated successfully. Jan 29 15:57:35.971449 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 15:57:35.973636 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Jan 29 15:57:35.980883 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Jan 29 15:57:35.981848 systemd-logind[1458]: Removed session 15. Jan 29 15:57:36.020620 sshd[5969]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:36.021408 sshd-session[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:36.025642 systemd-logind[1458]: New session 16 of user core. Jan 29 15:57:36.035761 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 15:57:36.795288 sshd[5972]: Connection closed by 10.0.0.1 port 38412 Jan 29 15:57:36.796402 sshd-session[5969]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:36.806189 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:38412.service: Deactivated successfully. Jan 29 15:57:36.807815 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 15:57:36.808463 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. 
Jan 29 15:57:36.817880 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:38414.service - OpenSSH per-connection server daemon (10.0.0.1:38414). Jan 29 15:57:36.818962 systemd-logind[1458]: Removed session 16. Jan 29 15:57:36.865918 sshd[5989]: Accepted publickey for core from 10.0.0.1 port 38414 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:36.867357 sshd-session[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:36.871277 systemd-logind[1458]: New session 17 of user core. Jan 29 15:57:36.880809 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 15:57:37.228728 sshd[5994]: Connection closed by 10.0.0.1 port 38414 Jan 29 15:57:37.228826 sshd-session[5989]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:37.236455 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:38414.service: Deactivated successfully. Jan 29 15:57:37.239454 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 15:57:37.240882 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Jan 29 15:57:37.245864 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:38428.service - OpenSSH per-connection server daemon (10.0.0.1:38428). Jan 29 15:57:37.247390 systemd-logind[1458]: Removed session 17. Jan 29 15:57:37.286347 sshd[6005]: Accepted publickey for core from 10.0.0.1 port 38428 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:37.288200 sshd-session[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:37.293051 systemd-logind[1458]: New session 18 of user core. Jan 29 15:57:37.307790 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 15:57:37.455022 sshd[6008]: Connection closed by 10.0.0.1 port 38428 Jan 29 15:57:37.455553 sshd-session[6005]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:37.458751 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:38428.service: Deactivated successfully. Jan 29 15:57:37.461762 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 15:57:37.462350 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Jan 29 15:57:37.463135 systemd-logind[1458]: Removed session 18. Jan 29 15:57:42.466973 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:55390.service - OpenSSH per-connection server daemon (10.0.0.1:55390). Jan 29 15:57:42.521114 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 55390 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:42.522368 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:42.526826 systemd-logind[1458]: New session 19 of user core. Jan 29 15:57:42.534763 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 15:57:42.666454 sshd[6028]: Connection closed by 10.0.0.1 port 55390 Jan 29 15:57:42.667152 sshd-session[6026]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:42.670580 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:55390.service: Deactivated successfully. Jan 29 15:57:42.674700 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 15:57:42.677790 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Jan 29 15:57:42.678811 systemd-logind[1458]: Removed session 19. 
Jan 29 15:57:45.412949 containerd[1473]: time="2025-01-29T15:57:45.412789646Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:45.412949 containerd[1473]: time="2025-01-29T15:57:45.412909283Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:45.412949 containerd[1473]: time="2025-01-29T15:57:45.412919882Z" level=info msg="StopPodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:45.413369 containerd[1473]: time="2025-01-29T15:57:45.413333669Z" level=info msg="RemovePodSandbox for \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:45.416602 containerd[1473]: time="2025-01-29T15:57:45.415229808Z" level=info msg="Forcibly stopping sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\"" Jan 29 15:57:45.416602 containerd[1473]: time="2025-01-29T15:57:45.415346484Z" level=info msg="TearDown network for sandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" successfully" Jan 29 15:57:45.419942 containerd[1473]: time="2025-01-29T15:57:45.419912896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.420112 containerd[1473]: time="2025-01-29T15:57:45.420013733Z" level=info msg="RemovePodSandbox \"9c25f461ca2875ca3e2f7193c94829bd787b92b4ed8bb31910033e6f5da269d9\" returns successfully" Jan 29 15:57:45.420379 containerd[1473]: time="2025-01-29T15:57:45.420351322Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:45.420450 containerd[1473]: time="2025-01-29T15:57:45.420433199Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:45.420450 containerd[1473]: time="2025-01-29T15:57:45.420444159Z" level=info msg="StopPodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:45.421329 containerd[1473]: time="2025-01-29T15:57:45.420777588Z" level=info msg="RemovePodSandbox for \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:45.421329 containerd[1473]: time="2025-01-29T15:57:45.420804907Z" level=info msg="Forcibly stopping sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\"" Jan 29 15:57:45.421329 containerd[1473]: time="2025-01-29T15:57:45.420870225Z" level=info msg="TearDown network for sandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" successfully" Jan 29 15:57:45.432668 containerd[1473]: time="2025-01-29T15:57:45.432638845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.432813 containerd[1473]: time="2025-01-29T15:57:45.432795520Z" level=info msg="RemovePodSandbox \"76941def0ff13ccb93485d5f7c9043d9ad547f4c29ec50ef40d0eb4a9aa160b4\" returns successfully" Jan 29 15:57:45.433186 containerd[1473]: time="2025-01-29T15:57:45.433158748Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:45.433285 containerd[1473]: time="2025-01-29T15:57:45.433240025Z" level=info msg="TearDown network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:45.433285 containerd[1473]: time="2025-01-29T15:57:45.433254425Z" level=info msg="StopPodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:45.434823 containerd[1473]: time="2025-01-29T15:57:45.433580534Z" level=info msg="RemovePodSandbox for \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:45.434823 containerd[1473]: time="2025-01-29T15:57:45.433620413Z" level=info msg="Forcibly stopping sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\"" Jan 29 15:57:45.434823 containerd[1473]: time="2025-01-29T15:57:45.433705890Z" level=info msg="TearDown network for sandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" successfully" Jan 29 15:57:45.436368 containerd[1473]: time="2025-01-29T15:57:45.436334605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.436438 containerd[1473]: time="2025-01-29T15:57:45.436391083Z" level=info msg="RemovePodSandbox \"5dcc221c70982e05c90cacb9a34506e46130ae4e10a55baa87ae71bbb8a3ed81\" returns successfully" Jan 29 15:57:45.436700 containerd[1473]: time="2025-01-29T15:57:45.436672954Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:45.436797 containerd[1473]: time="2025-01-29T15:57:45.436778671Z" level=info msg="TearDown network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" successfully" Jan 29 15:57:45.436797 containerd[1473]: time="2025-01-29T15:57:45.436793350Z" level=info msg="StopPodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" returns successfully" Jan 29 15:57:45.437144 containerd[1473]: time="2025-01-29T15:57:45.437120980Z" level=info msg="RemovePodSandbox for \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:45.437199 containerd[1473]: time="2025-01-29T15:57:45.437147299Z" level=info msg="Forcibly stopping sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\"" Jan 29 15:57:45.437221 containerd[1473]: time="2025-01-29T15:57:45.437207377Z" level=info msg="TearDown network for sandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" successfully" Jan 29 15:57:45.441463 containerd[1473]: time="2025-01-29T15:57:45.441394961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.441645 containerd[1473]: time="2025-01-29T15:57:45.441579635Z" level=info msg="RemovePodSandbox \"4cfae47ec5937109dd3547ed911b2ddf113e3d696e5d2385e442b36ff7256bd3\" returns successfully" Jan 29 15:57:45.442643 containerd[1473]: time="2025-01-29T15:57:45.442613002Z" level=info msg="StopPodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" Jan 29 15:57:45.442920 containerd[1473]: time="2025-01-29T15:57:45.442864474Z" level=info msg="TearDown network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" successfully" Jan 29 15:57:45.442920 containerd[1473]: time="2025-01-29T15:57:45.442901673Z" level=info msg="StopPodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" returns successfully" Jan 29 15:57:45.445521 containerd[1473]: time="2025-01-29T15:57:45.444297868Z" level=info msg="RemovePodSandbox for \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" Jan 29 15:57:45.445521 containerd[1473]: time="2025-01-29T15:57:45.444325307Z" level=info msg="Forcibly stopping sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\"" Jan 29 15:57:45.445521 containerd[1473]: time="2025-01-29T15:57:45.444395264Z" level=info msg="TearDown network for sandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" successfully" Jan 29 15:57:45.447180 containerd[1473]: time="2025-01-29T15:57:45.447152255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.447308 containerd[1473]: time="2025-01-29T15:57:45.447292411Z" level=info msg="RemovePodSandbox \"77d0cc79b54fc023f249cd6014e1cb89225f07f74d5419cfe832195d5c7d1e7d\" returns successfully" Jan 29 15:57:45.447699 containerd[1473]: time="2025-01-29T15:57:45.447657999Z" level=info msg="StopPodSandbox for \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\"" Jan 29 15:57:45.447782 containerd[1473]: time="2025-01-29T15:57:45.447756716Z" level=info msg="TearDown network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" successfully" Jan 29 15:57:45.447782 containerd[1473]: time="2025-01-29T15:57:45.447772035Z" level=info msg="StopPodSandbox for \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" returns successfully" Jan 29 15:57:45.449249 containerd[1473]: time="2025-01-29T15:57:45.448043666Z" level=info msg="RemovePodSandbox for \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\"" Jan 29 15:57:45.449249 containerd[1473]: time="2025-01-29T15:57:45.448067146Z" level=info msg="Forcibly stopping sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\"" Jan 29 15:57:45.449249 containerd[1473]: time="2025-01-29T15:57:45.448123184Z" level=info msg="TearDown network for sandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" successfully" Jan 29 15:57:45.450582 containerd[1473]: time="2025-01-29T15:57:45.450557665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.450699 containerd[1473]: time="2025-01-29T15:57:45.450671101Z" level=info msg="RemovePodSandbox \"96363fb94882d7ecf9d234172e89edee3e7d41db0d592a407ce4bf7c47a24677\" returns successfully" Jan 29 15:57:45.451036 containerd[1473]: time="2025-01-29T15:57:45.451012530Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:45.451140 containerd[1473]: time="2025-01-29T15:57:45.451124127Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:45.451140 containerd[1473]: time="2025-01-29T15:57:45.451139446Z" level=info msg="StopPodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:45.452200 containerd[1473]: time="2025-01-29T15:57:45.451422037Z" level=info msg="RemovePodSandbox for \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:45.452200 containerd[1473]: time="2025-01-29T15:57:45.451447596Z" level=info msg="Forcibly stopping sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\"" Jan 29 15:57:45.452200 containerd[1473]: time="2025-01-29T15:57:45.451505155Z" level=info msg="TearDown network for sandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" successfully" Jan 29 15:57:45.454062 containerd[1473]: time="2025-01-29T15:57:45.454035633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.454167 containerd[1473]: time="2025-01-29T15:57:45.454151589Z" level=info msg="RemovePodSandbox \"2e7a3451c720ef2f222867691a1a347a9bb58459989f5b3ba8d14044f0f2b3f7\" returns successfully" Jan 29 15:57:45.454671 containerd[1473]: time="2025-01-29T15:57:45.454636173Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:45.454754 containerd[1473]: time="2025-01-29T15:57:45.454738690Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:45.454754 containerd[1473]: time="2025-01-29T15:57:45.454751810Z" level=info msg="StopPodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:45.455616 containerd[1473]: time="2025-01-29T15:57:45.455044600Z" level=info msg="RemovePodSandbox for \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:45.455616 containerd[1473]: time="2025-01-29T15:57:45.455068919Z" level=info msg="Forcibly stopping sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\"" Jan 29 15:57:45.455616 containerd[1473]: time="2025-01-29T15:57:45.455133157Z" level=info msg="TearDown network for sandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" successfully" Jan 29 15:57:45.457398 containerd[1473]: time="2025-01-29T15:57:45.457363125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.457473 containerd[1473]: time="2025-01-29T15:57:45.457410604Z" level=info msg="RemovePodSandbox \"951f7ece42b241fac685d6f87f755c9ca65bab2207840045501e1cc287591147\" returns successfully" Jan 29 15:57:45.457965 containerd[1473]: time="2025-01-29T15:57:45.457724313Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:45.457965 containerd[1473]: time="2025-01-29T15:57:45.457813911Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:45.457965 containerd[1473]: time="2025-01-29T15:57:45.457823710Z" level=info msg="StopPodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:45.458183 containerd[1473]: time="2025-01-29T15:57:45.458162299Z" level=info msg="RemovePodSandbox for \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:45.458259 containerd[1473]: time="2025-01-29T15:57:45.458244617Z" level=info msg="Forcibly stopping sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\"" Jan 29 15:57:45.458356 containerd[1473]: time="2025-01-29T15:57:45.458342293Z" level=info msg="TearDown network for sandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" successfully" Jan 29 15:57:45.467023 containerd[1473]: time="2025-01-29T15:57:45.466956775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.467023 containerd[1473]: time="2025-01-29T15:57:45.467007053Z" level=info msg="RemovePodSandbox \"b9ce729d37f259f8f666d11b67f5e2592c0ae4928fab9a26948554239619bb84\" returns successfully" Jan 29 15:57:45.467367 containerd[1473]: time="2025-01-29T15:57:45.467339403Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:45.467447 containerd[1473]: time="2025-01-29T15:57:45.467431200Z" level=info msg="TearDown network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" successfully" Jan 29 15:57:45.467476 containerd[1473]: time="2025-01-29T15:57:45.467445999Z" level=info msg="StopPodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" returns successfully" Jan 29 15:57:45.478353 containerd[1473]: time="2025-01-29T15:57:45.477134526Z" level=info msg="RemovePodSandbox for \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:45.478353 containerd[1473]: time="2025-01-29T15:57:45.477173445Z" level=info msg="Forcibly stopping sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\"" Jan 29 15:57:45.478353 containerd[1473]: time="2025-01-29T15:57:45.477251682Z" level=info msg="TearDown network for sandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" successfully" Jan 29 15:57:45.480457 containerd[1473]: time="2025-01-29T15:57:45.480400020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.480517 containerd[1473]: time="2025-01-29T15:57:45.480488017Z" level=info msg="RemovePodSandbox \"ceed74ed445665e7d1b2378e0fb3aad13b15cd9c8cf16202c84db0466b2963a1\" returns successfully" Jan 29 15:57:45.483328 containerd[1473]: time="2025-01-29T15:57:45.482880220Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" Jan 29 15:57:45.483328 containerd[1473]: time="2025-01-29T15:57:45.482992096Z" level=info msg="TearDown network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" successfully" Jan 29 15:57:45.483328 containerd[1473]: time="2025-01-29T15:57:45.483002456Z" level=info msg="StopPodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" returns successfully" Jan 29 15:57:45.493623 containerd[1473]: time="2025-01-29T15:57:45.492188479Z" level=info msg="RemovePodSandbox for \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" Jan 29 15:57:45.493623 containerd[1473]: time="2025-01-29T15:57:45.492219958Z" level=info msg="Forcibly stopping sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\"" Jan 29 15:57:45.493623 containerd[1473]: time="2025-01-29T15:57:45.492284716Z" level=info msg="TearDown network for sandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" successfully" Jan 29 15:57:45.495529 containerd[1473]: time="2025-01-29T15:57:45.495492172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.495625 containerd[1473]: time="2025-01-29T15:57:45.495547010Z" level=info msg="RemovePodSandbox \"e7d6020ab320c2e80947adbe87c63439f2adfced753c46c335096d9618f90805\" returns successfully" Jan 29 15:57:45.496096 containerd[1473]: time="2025-01-29T15:57:45.495949197Z" level=info msg="StopPodSandbox for \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\"" Jan 29 15:57:45.496096 containerd[1473]: time="2025-01-29T15:57:45.496034595Z" level=info msg="TearDown network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" successfully" Jan 29 15:57:45.496096 containerd[1473]: time="2025-01-29T15:57:45.496043994Z" level=info msg="StopPodSandbox for \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" returns successfully" Jan 29 15:57:45.496999 containerd[1473]: time="2025-01-29T15:57:45.496844328Z" level=info msg="RemovePodSandbox for \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\"" Jan 29 15:57:45.496999 containerd[1473]: time="2025-01-29T15:57:45.496875487Z" level=info msg="Forcibly stopping sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\"" Jan 29 15:57:45.496999 containerd[1473]: time="2025-01-29T15:57:45.496937525Z" level=info msg="TearDown network for sandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" successfully" Jan 29 15:57:45.499645 containerd[1473]: time="2025-01-29T15:57:45.499337848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.499645 containerd[1473]: time="2025-01-29T15:57:45.499385446Z" level=info msg="RemovePodSandbox \"aa3953b94bc38f25832d5dd9fbdd1603f6edb2a524f97286f20ba4f9469875b9\" returns successfully" Jan 29 15:57:45.500050 containerd[1473]: time="2025-01-29T15:57:45.500024066Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:45.500467 containerd[1473]: time="2025-01-29T15:57:45.500447492Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:45.500887 containerd[1473]: time="2025-01-29T15:57:45.500631126Z" level=info msg="StopPodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:45.501448 containerd[1473]: time="2025-01-29T15:57:45.501424540Z" level=info msg="RemovePodSandbox for \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:45.501448 containerd[1473]: time="2025-01-29T15:57:45.501447460Z" level=info msg="Forcibly stopping sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\"" Jan 29 15:57:45.501530 containerd[1473]: time="2025-01-29T15:57:45.501505978Z" level=info msg="TearDown network for sandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" successfully" Jan 29 15:57:45.507199 containerd[1473]: time="2025-01-29T15:57:45.506926682Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.507199 containerd[1473]: time="2025-01-29T15:57:45.506980481Z" level=info msg="RemovePodSandbox \"8d0cbc701b87e69fe76621b6fe38e6f91f9e41d2bb17ecf6b0b6c1efa479fa14\" returns successfully" Jan 29 15:57:45.507453 containerd[1473]: time="2025-01-29T15:57:45.507285871Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:45.507453 containerd[1473]: time="2025-01-29T15:57:45.507366548Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:45.507453 containerd[1473]: time="2025-01-29T15:57:45.507376948Z" level=info msg="StopPodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:45.507821 containerd[1473]: time="2025-01-29T15:57:45.507793174Z" level=info msg="RemovePodSandbox for \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:45.507821 containerd[1473]: time="2025-01-29T15:57:45.507821254Z" level=info msg="Forcibly stopping sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\"" Jan 29 15:57:45.507920 containerd[1473]: time="2025-01-29T15:57:45.507883651Z" level=info msg="TearDown network for sandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" successfully" Jan 29 15:57:45.518191 containerd[1473]: time="2025-01-29T15:57:45.517601177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.518191 containerd[1473]: time="2025-01-29T15:57:45.517655696Z" level=info msg="RemovePodSandbox \"163f78b4e4f5882ec249fdc3448f2155f806a88d626ca4b63b5996a51582825c\" returns successfully" Jan 29 15:57:45.518191 containerd[1473]: time="2025-01-29T15:57:45.517989445Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:45.518191 containerd[1473]: time="2025-01-29T15:57:45.518064482Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:45.518191 containerd[1473]: time="2025-01-29T15:57:45.518073042Z" level=info msg="StopPodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:45.518374 containerd[1473]: time="2025-01-29T15:57:45.518331274Z" level=info msg="RemovePodSandbox for \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:45.518374 containerd[1473]: time="2025-01-29T15:57:45.518351833Z" level=info msg="Forcibly stopping sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\"" Jan 29 15:57:45.518458 containerd[1473]: time="2025-01-29T15:57:45.518436310Z" level=info msg="TearDown network for sandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" successfully" Jan 29 15:57:45.520808 containerd[1473]: time="2025-01-29T15:57:45.520771355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.520869 containerd[1473]: time="2025-01-29T15:57:45.520821633Z" level=info msg="RemovePodSandbox \"b53ba2e2ee9b43bb9e4b82f7d27cd68862fa9f66f6417cfd33b04d6cf2e55169\" returns successfully" Jan 29 15:57:45.521310 containerd[1473]: time="2025-01-29T15:57:45.521157502Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:45.521310 containerd[1473]: time="2025-01-29T15:57:45.521240540Z" level=info msg="TearDown network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" successfully" Jan 29 15:57:45.521310 containerd[1473]: time="2025-01-29T15:57:45.521249699Z" level=info msg="StopPodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" returns successfully" Jan 29 15:57:45.521551 containerd[1473]: time="2025-01-29T15:57:45.521528650Z" level=info msg="RemovePodSandbox for \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:45.521601 containerd[1473]: time="2025-01-29T15:57:45.521556169Z" level=info msg="Forcibly stopping sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\"" Jan 29 15:57:45.521653 containerd[1473]: time="2025-01-29T15:57:45.521638687Z" level=info msg="TearDown network for sandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" successfully" Jan 29 15:57:45.524055 containerd[1473]: time="2025-01-29T15:57:45.524024090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.524117 containerd[1473]: time="2025-01-29T15:57:45.524075288Z" level=info msg="RemovePodSandbox \"c49996fa2821194fabeb2746b97dcf78060a4c742a9ac43bd849db70cc4f44d7\" returns successfully" Jan 29 15:57:45.524409 containerd[1473]: time="2025-01-29T15:57:45.524360399Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" Jan 29 15:57:45.524457 containerd[1473]: time="2025-01-29T15:57:45.524442116Z" level=info msg="TearDown network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" successfully" Jan 29 15:57:45.524457 containerd[1473]: time="2025-01-29T15:57:45.524455956Z" level=info msg="StopPodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" returns successfully" Jan 29 15:57:45.525669 containerd[1473]: time="2025-01-29T15:57:45.524776625Z" level=info msg="RemovePodSandbox for \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" Jan 29 15:57:45.525669 containerd[1473]: time="2025-01-29T15:57:45.524807144Z" level=info msg="Forcibly stopping sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\"" Jan 29 15:57:45.525669 containerd[1473]: time="2025-01-29T15:57:45.524870262Z" level=info msg="TearDown network for sandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" successfully" Jan 29 15:57:45.527312 containerd[1473]: time="2025-01-29T15:57:45.527274144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.527355 containerd[1473]: time="2025-01-29T15:57:45.527329903Z" level=info msg="RemovePodSandbox \"d9bfef3b99ff82ad4b4b4a80a0addcb75e4d13d3c98c279701dd4ab8de34cb70\" returns successfully" Jan 29 15:57:45.527779 containerd[1473]: time="2025-01-29T15:57:45.527618293Z" level=info msg="StopPodSandbox for \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\"" Jan 29 15:57:45.527779 containerd[1473]: time="2025-01-29T15:57:45.527699291Z" level=info msg="TearDown network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" successfully" Jan 29 15:57:45.527779 containerd[1473]: time="2025-01-29T15:57:45.527710570Z" level=info msg="StopPodSandbox for \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" returns successfully" Jan 29 15:57:45.528084 containerd[1473]: time="2025-01-29T15:57:45.528047799Z" level=info msg="RemovePodSandbox for \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\"" Jan 29 15:57:45.528084 containerd[1473]: time="2025-01-29T15:57:45.528078598Z" level=info msg="Forcibly stopping sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\"" Jan 29 15:57:45.528154 containerd[1473]: time="2025-01-29T15:57:45.528139636Z" level=info msg="TearDown network for sandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" successfully" Jan 29 15:57:45.530791 containerd[1473]: time="2025-01-29T15:57:45.530742152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.530878 containerd[1473]: time="2025-01-29T15:57:45.530795591Z" level=info msg="RemovePodSandbox \"c75091bb5353d1013781dc72d15af78c53946a442eb7eafdd17b41a6214754fa\" returns successfully" Jan 29 15:57:45.531121 containerd[1473]: time="2025-01-29T15:57:45.531087501Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:45.531195 containerd[1473]: time="2025-01-29T15:57:45.531172738Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:45.531233 containerd[1473]: time="2025-01-29T15:57:45.531221417Z" level=info msg="StopPodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:45.531489 containerd[1473]: time="2025-01-29T15:57:45.531459969Z" level=info msg="RemovePodSandbox for \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:45.531512 containerd[1473]: time="2025-01-29T15:57:45.531490248Z" level=info msg="Forcibly stopping sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\"" Jan 29 15:57:45.531563 containerd[1473]: time="2025-01-29T15:57:45.531549846Z" level=info msg="TearDown network for sandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" successfully" Jan 29 15:57:45.533954 containerd[1473]: time="2025-01-29T15:57:45.533912890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.533986 containerd[1473]: time="2025-01-29T15:57:45.533969888Z" level=info msg="RemovePodSandbox \"fca2fa7bcb59687dfd9a13ed5fd19aa621e35304f830f78233ee8809f51b02cc\" returns successfully" Jan 29 15:57:45.534329 containerd[1473]: time="2025-01-29T15:57:45.534297397Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:45.534409 containerd[1473]: time="2025-01-29T15:57:45.534387754Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:45.534409 containerd[1473]: time="2025-01-29T15:57:45.534402874Z" level=info msg="StopPodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:45.534669 containerd[1473]: time="2025-01-29T15:57:45.534643546Z" level=info msg="RemovePodSandbox for \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:45.534705 containerd[1473]: time="2025-01-29T15:57:45.534668665Z" level=info msg="Forcibly stopping sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\"" Jan 29 15:57:45.534754 containerd[1473]: time="2025-01-29T15:57:45.534736623Z" level=info msg="TearDown network for sandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" successfully" Jan 29 15:57:45.537265 containerd[1473]: time="2025-01-29T15:57:45.537190184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.537305 containerd[1473]: time="2025-01-29T15:57:45.537283381Z" level=info msg="RemovePodSandbox \"ddb11314924cf73c1e4c91091851e1fa4d6994eb7f08a0d365e487d4eb489a46\" returns successfully" Jan 29 15:57:45.537672 containerd[1473]: time="2025-01-29T15:57:45.537640489Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:45.537761 containerd[1473]: time="2025-01-29T15:57:45.537743206Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:45.537761 containerd[1473]: time="2025-01-29T15:57:45.537758925Z" level=info msg="StopPodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:45.538066 containerd[1473]: time="2025-01-29T15:57:45.538045236Z" level=info msg="RemovePodSandbox for \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:45.538096 containerd[1473]: time="2025-01-29T15:57:45.538070475Z" level=info msg="Forcibly stopping sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\"" Jan 29 15:57:45.538138 containerd[1473]: time="2025-01-29T15:57:45.538126434Z" level=info msg="TearDown network for sandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" successfully" Jan 29 15:57:45.540409 containerd[1473]: time="2025-01-29T15:57:45.540371361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.540449 containerd[1473]: time="2025-01-29T15:57:45.540424679Z" level=info msg="RemovePodSandbox \"d3ad61c82f0c68cebc4c4d986134e1b8c4be5a4b5c4bca958bf30ed7fced63a9\" returns successfully" Jan 29 15:57:45.540711 containerd[1473]: time="2025-01-29T15:57:45.540687791Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:45.540813 containerd[1473]: time="2025-01-29T15:57:45.540788467Z" level=info msg="TearDown network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" successfully" Jan 29 15:57:45.540813 containerd[1473]: time="2025-01-29T15:57:45.540805867Z" level=info msg="StopPodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" returns successfully" Jan 29 15:57:45.541201 containerd[1473]: time="2025-01-29T15:57:45.541160415Z" level=info msg="RemovePodSandbox for \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:45.541236 containerd[1473]: time="2025-01-29T15:57:45.541205614Z" level=info msg="Forcibly stopping sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\"" Jan 29 15:57:45.541296 containerd[1473]: time="2025-01-29T15:57:45.541280772Z" level=info msg="TearDown network for sandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" successfully" Jan 29 15:57:45.543527 containerd[1473]: time="2025-01-29T15:57:45.543482380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.543558 containerd[1473]: time="2025-01-29T15:57:45.543539499Z" level=info msg="RemovePodSandbox \"6c8b3a974f076eb547fce19fa3e9548369ac376af43cd3b12dc47792bcf5b55f\" returns successfully" Jan 29 15:57:45.543907 containerd[1473]: time="2025-01-29T15:57:45.543853848Z" level=info msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" Jan 29 15:57:45.543991 containerd[1473]: time="2025-01-29T15:57:45.543974444Z" level=info msg="TearDown network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" successfully" Jan 29 15:57:45.543991 containerd[1473]: time="2025-01-29T15:57:45.543989404Z" level=info msg="StopPodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" returns successfully" Jan 29 15:57:45.544355 containerd[1473]: time="2025-01-29T15:57:45.544319433Z" level=info msg="RemovePodSandbox for \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" Jan 29 15:57:45.544355 containerd[1473]: time="2025-01-29T15:57:45.544347272Z" level=info msg="Forcibly stopping sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\"" Jan 29 15:57:45.544429 containerd[1473]: time="2025-01-29T15:57:45.544403711Z" level=info msg="TearDown network for sandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" successfully" Jan 29 15:57:45.546848 containerd[1473]: time="2025-01-29T15:57:45.546805273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.546893 containerd[1473]: time="2025-01-29T15:57:45.546865591Z" level=info msg="RemovePodSandbox \"2a8ad66df436335a4222d5b02f7e192102d55c7ded528cf9663f094218018323\" returns successfully" Jan 29 15:57:45.547247 containerd[1473]: time="2025-01-29T15:57:45.547219940Z" level=info msg="StopPodSandbox for \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\"" Jan 29 15:57:45.547326 containerd[1473]: time="2025-01-29T15:57:45.547309537Z" level=info msg="TearDown network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" successfully" Jan 29 15:57:45.547326 containerd[1473]: time="2025-01-29T15:57:45.547323176Z" level=info msg="StopPodSandbox for \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" returns successfully" Jan 29 15:57:45.547644 containerd[1473]: time="2025-01-29T15:57:45.547622886Z" level=info msg="RemovePodSandbox for \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\"" Jan 29 15:57:45.547683 containerd[1473]: time="2025-01-29T15:57:45.547651166Z" level=info msg="Forcibly stopping sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\"" Jan 29 15:57:45.547722 containerd[1473]: time="2025-01-29T15:57:45.547709524Z" level=info msg="TearDown network for sandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" successfully" Jan 29 15:57:45.550163 containerd[1473]: time="2025-01-29T15:57:45.550026649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.550163 containerd[1473]: time="2025-01-29T15:57:45.550082127Z" level=info msg="RemovePodSandbox \"64eb0165895a9caede21ad3d9ef0dbcb7332f616ad6bdddbcec5b863fb9f60f3\" returns successfully" Jan 29 15:57:45.550997 containerd[1473]: time="2025-01-29T15:57:45.550944819Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:45.551065 containerd[1473]: time="2025-01-29T15:57:45.551042376Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:45.551065 containerd[1473]: time="2025-01-29T15:57:45.551055615Z" level=info msg="StopPodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:45.551381 containerd[1473]: time="2025-01-29T15:57:45.551353086Z" level=info msg="RemovePodSandbox for \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:45.551430 containerd[1473]: time="2025-01-29T15:57:45.551386565Z" level=info msg="Forcibly stopping sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\"" Jan 29 15:57:45.551467 containerd[1473]: time="2025-01-29T15:57:45.551451203Z" level=info msg="TearDown network for sandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" successfully" Jan 29 15:57:45.554428 containerd[1473]: time="2025-01-29T15:57:45.554375228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.554503 containerd[1473]: time="2025-01-29T15:57:45.554435426Z" level=info msg="RemovePodSandbox \"e8c6d4c66d83dac7ded1dd9a6b2ed596754aafa30a022828ad265239d2a71583\" returns successfully" Jan 29 15:57:45.554886 containerd[1473]: time="2025-01-29T15:57:45.554850253Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:45.554965 containerd[1473]: time="2025-01-29T15:57:45.554949890Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:45.554965 containerd[1473]: time="2025-01-29T15:57:45.554963689Z" level=info msg="StopPodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:45.555241 containerd[1473]: time="2025-01-29T15:57:45.555220361Z" level=info msg="RemovePodSandbox for \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:45.555284 containerd[1473]: time="2025-01-29T15:57:45.555247040Z" level=info msg="Forcibly stopping sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\"" Jan 29 15:57:45.555324 containerd[1473]: time="2025-01-29T15:57:45.555308958Z" level=info msg="TearDown network for sandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" successfully" Jan 29 15:57:45.557594 containerd[1473]: time="2025-01-29T15:57:45.557555845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.557653 containerd[1473]: time="2025-01-29T15:57:45.557626923Z" level=info msg="RemovePodSandbox \"17d2fc312dd2b29c914dcc661a9b6457b31fe07ddbb7ed46e6fde3a2302a159c\" returns successfully" Jan 29 15:57:45.557963 containerd[1473]: time="2025-01-29T15:57:45.557932433Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:45.558036 containerd[1473]: time="2025-01-29T15:57:45.558018510Z" level=info msg="TearDown network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" successfully" Jan 29 15:57:45.558036 containerd[1473]: time="2025-01-29T15:57:45.558033310Z" level=info msg="StopPodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" returns successfully" Jan 29 15:57:45.558534 containerd[1473]: time="2025-01-29T15:57:45.558249303Z" level=info msg="RemovePodSandbox for \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:45.558534 containerd[1473]: time="2025-01-29T15:57:45.558279982Z" level=info msg="Forcibly stopping sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\"" Jan 29 15:57:45.558534 containerd[1473]: time="2025-01-29T15:57:45.558341220Z" level=info msg="TearDown network for sandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" successfully" Jan 29 15:57:45.560848 containerd[1473]: time="2025-01-29T15:57:45.560810020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.560898 containerd[1473]: time="2025-01-29T15:57:45.560864658Z" level=info msg="RemovePodSandbox \"9c992ab55ff44a7f4af9e80fa4ee4c4d15e3fda81f20027795489844edfdd401\" returns successfully" Jan 29 15:57:45.561144 containerd[1473]: time="2025-01-29T15:57:45.561108850Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" Jan 29 15:57:45.561217 containerd[1473]: time="2025-01-29T15:57:45.561194208Z" level=info msg="TearDown network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" successfully" Jan 29 15:57:45.561217 containerd[1473]: time="2025-01-29T15:57:45.561210047Z" level=info msg="StopPodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" returns successfully" Jan 29 15:57:45.561444 containerd[1473]: time="2025-01-29T15:57:45.561413121Z" level=info msg="RemovePodSandbox for \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" Jan 29 15:57:45.561471 containerd[1473]: time="2025-01-29T15:57:45.561443080Z" level=info msg="Forcibly stopping sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\"" Jan 29 15:57:45.561520 containerd[1473]: time="2025-01-29T15:57:45.561507198Z" level=info msg="TearDown network for sandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" successfully" Jan 29 15:57:45.563948 containerd[1473]: time="2025-01-29T15:57:45.563910920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.563989 containerd[1473]: time="2025-01-29T15:57:45.563962558Z" level=info msg="RemovePodSandbox \"cfe9c8cc90cc2f699da920ef6676a8a34bc5e57b20d90a7d0cdf32114f1529da\" returns successfully" Jan 29 15:57:45.564264 containerd[1473]: time="2025-01-29T15:57:45.564224710Z" level=info msg="StopPodSandbox for \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\"" Jan 29 15:57:45.564338 containerd[1473]: time="2025-01-29T15:57:45.564317427Z" level=info msg="TearDown network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" successfully" Jan 29 15:57:45.564338 containerd[1473]: time="2025-01-29T15:57:45.564332786Z" level=info msg="StopPodSandbox for \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" returns successfully" Jan 29 15:57:45.564602 containerd[1473]: time="2025-01-29T15:57:45.564572778Z" level=info msg="RemovePodSandbox for \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\"" Jan 29 15:57:45.564626 containerd[1473]: time="2025-01-29T15:57:45.564604857Z" level=info msg="Forcibly stopping sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\"" Jan 29 15:57:45.564675 containerd[1473]: time="2025-01-29T15:57:45.564662136Z" level=info msg="TearDown network for sandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" successfully" Jan 29 15:57:45.566842 containerd[1473]: time="2025-01-29T15:57:45.566810586Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.566867 containerd[1473]: time="2025-01-29T15:57:45.566859424Z" level=info msg="RemovePodSandbox \"29644099256f2386ba5c69768cb0e005aac536e6a16ab5a5e99b1f9c3764c6f8\" returns successfully" Jan 29 15:57:45.567194 containerd[1473]: time="2025-01-29T15:57:45.567162775Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:45.567275 containerd[1473]: time="2025-01-29T15:57:45.567254292Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:45.567275 containerd[1473]: time="2025-01-29T15:57:45.567269331Z" level=info msg="StopPodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:45.567562 containerd[1473]: time="2025-01-29T15:57:45.567528443Z" level=info msg="RemovePodSandbox for \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:45.567604 containerd[1473]: time="2025-01-29T15:57:45.567561322Z" level=info msg="Forcibly stopping sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\"" Jan 29 15:57:45.567646 containerd[1473]: time="2025-01-29T15:57:45.567632319Z" level=info msg="TearDown network for sandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" successfully" Jan 29 15:57:45.580636 containerd[1473]: time="2025-01-29T15:57:45.580581261Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.580670 containerd[1473]: time="2025-01-29T15:57:45.580659658Z" level=info msg="RemovePodSandbox \"e6ea8d2c102edcecd2243217920e29653068da4a60a192e672b718b3715b3061\" returns successfully" Jan 29 15:57:45.581066 containerd[1473]: time="2025-01-29T15:57:45.581036646Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:45.581155 containerd[1473]: time="2025-01-29T15:57:45.581137643Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:45.581155 containerd[1473]: time="2025-01-29T15:57:45.581152922Z" level=info msg="StopPodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:45.581419 containerd[1473]: time="2025-01-29T15:57:45.581389955Z" level=info msg="RemovePodSandbox for \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:45.581419 containerd[1473]: time="2025-01-29T15:57:45.581410914Z" level=info msg="Forcibly stopping sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\"" Jan 29 15:57:45.581491 containerd[1473]: time="2025-01-29T15:57:45.581466432Z" level=info msg="TearDown network for sandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" successfully" Jan 29 15:57:45.584046 containerd[1473]: time="2025-01-29T15:57:45.583987871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.584046 containerd[1473]: time="2025-01-29T15:57:45.584041669Z" level=info msg="RemovePodSandbox \"2e4d8789262d680e00e88795078786ee5b058e8c20656ddb44a04a5c22a32787\" returns successfully" Jan 29 15:57:45.584354 containerd[1473]: time="2025-01-29T15:57:45.584310660Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:45.584421 containerd[1473]: time="2025-01-29T15:57:45.584401657Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:45.584421 containerd[1473]: time="2025-01-29T15:57:45.584416177Z" level=info msg="StopPodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:45.584650 containerd[1473]: time="2025-01-29T15:57:45.584620250Z" level=info msg="RemovePodSandbox for \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:45.584650 containerd[1473]: time="2025-01-29T15:57:45.584646209Z" level=info msg="Forcibly stopping sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\"" Jan 29 15:57:45.584728 containerd[1473]: time="2025-01-29T15:57:45.584706447Z" level=info msg="TearDown network for sandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" successfully" Jan 29 15:57:45.587598 containerd[1473]: time="2025-01-29T15:57:45.587387001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.587598 containerd[1473]: time="2025-01-29T15:57:45.587465918Z" level=info msg="RemovePodSandbox \"2aa5a54b817c9b4f50ad1f30c35e951b008173f2adae66a3d6292ecaf78f4671\" returns successfully" Jan 29 15:57:45.587864 containerd[1473]: time="2025-01-29T15:57:45.587833506Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:45.588533 containerd[1473]: time="2025-01-29T15:57:45.587968222Z" level=info msg="TearDown network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" successfully" Jan 29 15:57:45.588533 containerd[1473]: time="2025-01-29T15:57:45.587985021Z" level=info msg="StopPodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" returns successfully" Jan 29 15:57:45.590153 containerd[1473]: time="2025-01-29T15:57:45.590119352Z" level=info msg="RemovePodSandbox for \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:45.590237 containerd[1473]: time="2025-01-29T15:57:45.590224229Z" level=info msg="Forcibly stopping sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\"" Jan 29 15:57:45.590354 containerd[1473]: time="2025-01-29T15:57:45.590339065Z" level=info msg="TearDown network for sandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" successfully" Jan 29 15:57:45.592661 containerd[1473]: time="2025-01-29T15:57:45.592634791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.592775 containerd[1473]: time="2025-01-29T15:57:45.592757267Z" level=info msg="RemovePodSandbox \"aa2fa21d44e5cf31b9e02024850c23e91f9b3f0ee334df38944f054c3900c63d\" returns successfully" Jan 29 15:57:45.593224 containerd[1473]: time="2025-01-29T15:57:45.593201013Z" level=info msg="StopPodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" Jan 29 15:57:45.593302 containerd[1473]: time="2025-01-29T15:57:45.593286050Z" level=info msg="TearDown network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" successfully" Jan 29 15:57:45.593326 containerd[1473]: time="2025-01-29T15:57:45.593300529Z" level=info msg="StopPodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" returns successfully" Jan 29 15:57:45.593570 containerd[1473]: time="2025-01-29T15:57:45.593549121Z" level=info msg="RemovePodSandbox for \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" Jan 29 15:57:45.593612 containerd[1473]: time="2025-01-29T15:57:45.593575081Z" level=info msg="Forcibly stopping sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\"" Jan 29 15:57:45.593665 containerd[1473]: time="2025-01-29T15:57:45.593650718Z" level=info msg="TearDown network for sandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" successfully" Jan 29 15:57:45.596026 containerd[1473]: time="2025-01-29T15:57:45.595990083Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 15:57:45.596086 containerd[1473]: time="2025-01-29T15:57:45.596038601Z" level=info msg="RemovePodSandbox \"2ee6ba03f2beb2df7dfc78047df1705a794879d0ced756cba90f658e6512d52a\" returns successfully" Jan 29 15:57:45.596418 containerd[1473]: time="2025-01-29T15:57:45.596398629Z" level=info msg="StopPodSandbox for \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\"" Jan 29 15:57:45.596489 containerd[1473]: time="2025-01-29T15:57:45.596475587Z" level=info msg="TearDown network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" successfully" Jan 29 15:57:45.596513 containerd[1473]: time="2025-01-29T15:57:45.596487986Z" level=info msg="StopPodSandbox for \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" returns successfully" Jan 29 15:57:45.596782 containerd[1473]: time="2025-01-29T15:57:45.596761098Z" level=info msg="RemovePodSandbox for \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\"" Jan 29 15:57:45.596878 containerd[1473]: time="2025-01-29T15:57:45.596787937Z" level=info msg="Forcibly stopping sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\"" Jan 29 15:57:45.596878 containerd[1473]: time="2025-01-29T15:57:45.596847255Z" level=info msg="TearDown network for sandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" successfully" Jan 29 15:57:45.599244 containerd[1473]: time="2025-01-29T15:57:45.599213258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 15:57:45.599288 containerd[1473]: time="2025-01-29T15:57:45.599264257Z" level=info msg="RemovePodSandbox \"3209bc2f63e2e97fd654ec50599c37b0fa0e0450245d03e640e8c1f3ea6daf8e\" returns successfully" Jan 29 15:57:45.892149 kubelet[2572]: E0129 15:57:45.891006 2572 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 15:57:45.989159 kubelet[2572]: I0129 15:57:45.989109 2572 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 15:57:47.680001 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:55398.service - OpenSSH per-connection server daemon (10.0.0.1:55398). Jan 29 15:57:47.722523 sshd[6070]: Accepted publickey for core from 10.0.0.1 port 55398 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:47.723615 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:47.727199 systemd-logind[1458]: New session 20 of user core. Jan 29 15:57:47.737896 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 15:57:47.853716 sshd[6072]: Connection closed by 10.0.0.1 port 55398 Jan 29 15:57:47.854052 sshd-session[6070]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:47.857143 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:55398.service: Deactivated successfully. Jan 29 15:57:47.859129 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 15:57:47.861170 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Jan 29 15:57:47.862040 systemd-logind[1458]: Removed session 20. 
Jan 29 15:57:51.350024 containerd[1473]: time="2025-01-29T15:57:51.349962758Z" level=info msg="StopContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" with timeout 300 (s)" Jan 29 15:57:51.351895 containerd[1473]: time="2025-01-29T15:57:51.351862108Z" level=info msg="Stop container \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" with signal terminated" Jan 29 15:57:51.419824 containerd[1473]: time="2025-01-29T15:57:51.419781412Z" level=info msg="StopContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" with timeout 30 (s)" Jan 29 15:57:51.420741 containerd[1473]: time="2025-01-29T15:57:51.420702188Z" level=info msg="Stop container \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" with signal terminated" Jan 29 15:57:51.442332 systemd[1]: cri-containerd-92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812.scope: Deactivated successfully. Jan 29 15:57:51.479651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812-rootfs.mount: Deactivated successfully. Jan 29 15:57:51.487304 containerd[1473]: time="2025-01-29T15:57:51.487234490Z" level=info msg="shim disconnected" id=92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812 namespace=k8s.io Jan 29 15:57:51.487304 containerd[1473]: time="2025-01-29T15:57:51.487303008Z" level=warning msg="cleaning up after shim disconnected" id=92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812 namespace=k8s.io Jan 29 15:57:51.487487 containerd[1473]: time="2025-01-29T15:57:51.487315407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:51.516520 containerd[1473]: time="2025-01-29T15:57:51.516458109Z" level=info msg="StopContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" returns successfully" Jan 29 15:57:51.517561 containerd[1473]: time="2025-01-29T15:57:51.517027133Z" level=info msg="StopPodSandbox for \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\"" Jan 29 15:57:51.517561 containerd[1473]: time="2025-01-29T15:57:51.517058493Z" level=info msg="Container to stop \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 15:57:51.519454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0-shm.mount: Deactivated successfully. Jan 29 15:57:51.526844 systemd[1]: cri-containerd-1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0.scope: Deactivated successfully. Jan 29 15:57:51.552068 containerd[1473]: time="2025-01-29T15:57:51.551980279Z" level=info msg="shim disconnected" id=1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0 namespace=k8s.io Jan 29 15:57:51.552068 containerd[1473]: time="2025-01-29T15:57:51.552040878Z" level=warning msg="cleaning up after shim disconnected" id=1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0 namespace=k8s.io Jan 29 15:57:51.552068 containerd[1473]: time="2025-01-29T15:57:51.552050157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:51.554212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0-rootfs.mount: Deactivated successfully. 
Jan 29 15:57:51.614348 containerd[1473]: time="2025-01-29T15:57:51.614201616Z" level=error msg="ExecSync for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Jan 29 15:57:51.615326 kubelet[2572]: E0129 15:57:51.614962 2572 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" cmd=["/usr/bin/check-status","-r"] Jan 29 15:57:51.616284 containerd[1473]: time="2025-01-29T15:57:51.615608339Z" level=error msg="ExecSync for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Jan 29 15:57:51.616449 kubelet[2572]: E0129 15:57:51.616389 2572 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" cmd=["/usr/bin/check-status","-r"] Jan 29 15:57:51.616737 containerd[1473]: time="2025-01-29T15:57:51.616702789Z" level=error msg="ExecSync for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state" Jan 29 15:57:51.616880 kubelet[2572]: E0129 15:57:51.616849 2572 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" cmd=["/usr/bin/check-status","-r"] Jan 29 15:57:51.679652 systemd-networkd[1419]: calicabd2aa4537: Link DOWN Jan 29 15:57:51.679659 systemd-networkd[1419]: calicabd2aa4537: Lost carrier Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.677 [INFO][6174] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.677 [INFO][6174] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" iface="eth0" netns="/var/run/netns/cni-acecd5e7-af9d-3e4f-6c56-91a922302264" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.678 [INFO][6174] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" iface="eth0" netns="/var/run/netns/cni-acecd5e7-af9d-3e4f-6c56-91a922302264" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.690 [INFO][6174] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" after=12.818098ms iface="eth0" netns="/var/run/netns/cni-acecd5e7-af9d-3e4f-6c56-91a922302264" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.691 [INFO][6174] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.691 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.720 [INFO][6188] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.720 [INFO][6188] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.720 [INFO][6188] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.765 [INFO][6188] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.765 [INFO][6188] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" HandleID="k8s-pod-network.1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Workload="localhost-k8s-calico--kube--controllers--596fc4546b--ddndj-eth0" Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.768 [INFO][6188] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 15:57:51.776812 containerd[1473]: 2025-01-29 15:57:51.774 [INFO][6174] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0" Jan 29 15:57:51.778382 systemd[1]: run-netns-cni\x2dacecd5e7\x2daf9d\x2d3e4f\x2d6c56\x2d91a922302264.mount: Deactivated successfully. 
Jan 29 15:57:51.779427 containerd[1473]: time="2025-01-29T15:57:51.779363162Z" level=info msg="TearDown network for sandbox \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\" successfully" Jan 29 15:57:51.779427 containerd[1473]: time="2025-01-29T15:57:51.779412481Z" level=info msg="StopPodSandbox for \"1a9f06b3b153ca36cd3b68b1f2be6dede27f782f2781427777704acbb25f76b0\" returns successfully" Jan 29 15:57:51.914844 kubelet[2572]: I0129 15:57:51.914734 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cf572\" (UniqueName: \"kubernetes.io/projected/6b44de13-eff2-42e8-844c-3aa53fc7af03-kube-api-access-cf572\") pod \"6b44de13-eff2-42e8-844c-3aa53fc7af03\" (UID: \"6b44de13-eff2-42e8-844c-3aa53fc7af03\") " Jan 29 15:57:51.914844 kubelet[2572]: I0129 15:57:51.914787 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b44de13-eff2-42e8-844c-3aa53fc7af03-tigera-ca-bundle\") pod \"6b44de13-eff2-42e8-844c-3aa53fc7af03\" (UID: \"6b44de13-eff2-42e8-844c-3aa53fc7af03\") " Jan 29 15:57:51.919767 kubelet[2572]: I0129 15:57:51.919087 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b44de13-eff2-42e8-844c-3aa53fc7af03-kube-api-access-cf572" (OuterVolumeSpecName: "kube-api-access-cf572") pod "6b44de13-eff2-42e8-844c-3aa53fc7af03" (UID: "6b44de13-eff2-42e8-844c-3aa53fc7af03"). InnerVolumeSpecName "kube-api-access-cf572". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 15:57:51.921215 systemd[1]: var-lib-kubelet-pods-6b44de13\x2deff2\x2d42e8\x2d844c\x2d3aa53fc7af03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcf572.mount: Deactivated successfully. Jan 29 15:57:51.924705 kubelet[2572]: I0129 15:57:51.924667 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b44de13-eff2-42e8-844c-3aa53fc7af03-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6b44de13-eff2-42e8-844c-3aa53fc7af03" (UID: "6b44de13-eff2-42e8-844c-3aa53fc7af03"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 15:57:51.932285 kubelet[2572]: I0129 15:57:51.932255 2572 scope.go:117] "RemoveContainer" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" Jan 29 15:57:51.934276 containerd[1473]: time="2025-01-29T15:57:51.934181144Z" level=info msg="RemoveContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\"" Jan 29 15:57:51.940119 systemd[1]: Removed slice kubepods-besteffort-pod6b44de13_eff2_42e8_844c_3aa53fc7af03.slice - libcontainer container kubepods-besteffort-pod6b44de13_eff2_42e8_844c_3aa53fc7af03.slice. 
Jan 29 15:57:51.942524 containerd[1473]: time="2025-01-29T15:57:51.942464843Z" level=info msg="RemoveContainer for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" returns successfully" Jan 29 15:57:51.945802 kubelet[2572]: I0129 15:57:51.945762 2572 scope.go:117] "RemoveContainer" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" Jan 29 15:57:51.946153 containerd[1473]: time="2025-01-29T15:57:51.946065266Z" level=error msg="ContainerStatus for \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\": not found" Jan 29 15:57:51.946255 kubelet[2572]: E0129 15:57:51.946225 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\": not found" containerID="92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812" Jan 29 15:57:51.946344 kubelet[2572]: I0129 15:57:51.946259 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812"} err="failed to get container status \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\": rpc error: code = NotFound desc = an error occurred when try to find container \"92fc1f77aa902c1f2ed1deac37d49f01c1018dae52be42105c42c1d189ea4812\": not found" Jan 29 15:57:51.977122 kubelet[2572]: I0129 15:57:51.977079 2572 memory_manager.go:355] "RemoveStaleState removing state" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" containerName="calico-kube-controllers" Jan 29 15:57:51.986172 systemd[1]: Created slice kubepods-besteffort-podb1f525e0_f76d_45a1_acf2_9b4a7a874df8.slice - libcontainer container kubepods-besteffort-podb1f525e0_f76d_45a1_acf2_9b4a7a874df8.slice. 
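The ContainerStatus/DeleteContainer "not found" errors above are benign: once the container is already gone from the runtime, cleanup proceeds as if removal had succeeded. A small sketch of that idempotent-removal pattern using gRPC status codes; removeFn is a hypothetical stand-in for the runtime call, not the kubelet's actual code path:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound response as "already removed" instead of
// surfacing it as a failure, as the log entries above suggest.
func removeContainer(id string, removeFn func(string) error) error {
	err := removeFn(id)
	if err == nil {
		return nil
	}
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %q already removed, ignoring NotFound\n", id)
		return nil
	}
	return fmt.Errorf("remove %q: %w", id, err)
}

func main() {
	// Simulate the runtime answering NotFound, as in the entries above.
	notFound := status.Error(codes.NotFound, "container not found")
	_ = removeContainer("<container-id>", func(string) error { return notFound })
}
```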
Jan 29 15:57:52.015728 kubelet[2572]: I0129 15:57:52.015670 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cf572\" (UniqueName: \"kubernetes.io/projected/6b44de13-eff2-42e8-844c-3aa53fc7af03-kube-api-access-cf572\") on node \"localhost\" DevicePath \"\"" Jan 29 15:57:52.015728 kubelet[2572]: I0129 15:57:52.015705 2572 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b44de13-eff2-42e8-844c-3aa53fc7af03-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 15:57:52.116154 kubelet[2572]: I0129 15:57:52.116109 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1f525e0-f76d-45a1-acf2-9b4a7a874df8-tigera-ca-bundle\") pod \"calico-kube-controllers-76cc8cbd88-xnqz2\" (UID: \"b1f525e0-f76d-45a1-acf2-9b4a7a874df8\") " pod="calico-system/calico-kube-controllers-76cc8cbd88-xnqz2" Jan 29 15:57:52.116154 kubelet[2572]: I0129 15:57:52.116157 2572 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8bl\" (UniqueName: \"kubernetes.io/projected/b1f525e0-f76d-45a1-acf2-9b4a7a874df8-kube-api-access-kk8bl\") pod \"calico-kube-controllers-76cc8cbd88-xnqz2\" (UID: \"b1f525e0-f76d-45a1-acf2-9b4a7a874df8\") " pod="calico-system/calico-kube-controllers-76cc8cbd88-xnqz2" Jan 29 15:57:52.290659 containerd[1473]: time="2025-01-29T15:57:52.289971036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cc8cbd88-xnqz2,Uid:b1f525e0-f76d-45a1-acf2-9b4a7a874df8,Namespace:calico-system,Attempt:0,}" Jan 29 15:57:52.422328 systemd-networkd[1419]: cali738746e6baf: Link UP Jan 29 15:57:52.422764 systemd-networkd[1419]: cali738746e6baf: Gained carrier Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.338 [INFO][6200] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0 calico-kube-controllers-76cc8cbd88- calico-system b1f525e0-f76d-45a1-acf2-9b4a7a874df8 1338 0 2025-01-29 15:57:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76cc8cbd88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76cc8cbd88-xnqz2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali738746e6baf [] []}} ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.338 [INFO][6200] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.374 [INFO][6214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" 
HandleID="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Workload="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.385 [INFO][6214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" HandleID="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Workload="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b4ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76cc8cbd88-xnqz2", "timestamp":"2025-01-29 15:57:52.374339051 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.385 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.385 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.385 [INFO][6214] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.388 [INFO][6214] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.391 [INFO][6214] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.396 [INFO][6214] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.398 [INFO][6214] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.400 [INFO][6214] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.400 [INFO][6214] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.402 [INFO][6214] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90 Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.411 [INFO][6214] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.417 [INFO][6214] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.417 [INFO][6214] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" host="localhost" Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.417 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 15:57:52.433342 containerd[1473]: 2025-01-29 15:57:52.417 [INFO][6214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" HandleID="k8s-pod-network.9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Workload="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.419 [INFO][6200] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0", GenerateName:"calico-kube-controllers-76cc8cbd88-", Namespace:"calico-system", SelfLink:"", UID:"b1f525e0-f76d-45a1-acf2-9b4a7a874df8", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76cc8cbd88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76cc8cbd88-xnqz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali738746e6baf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.419 [INFO][6200] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.419 [INFO][6200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali738746e6baf ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.422 [INFO][6200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" 
Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.422 [INFO][6200] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0", GenerateName:"calico-kube-controllers-76cc8cbd88-", Namespace:"calico-system", SelfLink:"", UID:"b1f525e0-f76d-45a1-acf2-9b4a7a874df8", ResourceVersion:"1338", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 15, 57, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76cc8cbd88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90", Pod:"calico-kube-controllers-76cc8cbd88-xnqz2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali738746e6baf", MAC:"fe:a2:90:fd:da:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 15:57:52.434195 containerd[1473]: 2025-01-29 15:57:52.430 [INFO][6200] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90" Namespace="calico-system" Pod="calico-kube-controllers-76cc8cbd88-xnqz2" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76cc8cbd88--xnqz2-eth0" Jan 29 15:57:52.453116 containerd[1473]: time="2025-01-29T15:57:52.452878738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 15:57:52.453116 containerd[1473]: time="2025-01-29T15:57:52.452935776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 15:57:52.453116 containerd[1473]: time="2025-01-29T15:57:52.452950416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:52.453116 containerd[1473]: time="2025-01-29T15:57:52.453031494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 15:57:52.475760 systemd[1]: Started cri-containerd-9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90.scope - libcontainer container 9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90. 
Jan 29 15:57:52.481859 systemd[1]: var-lib-kubelet-pods-6b44de13\x2deff2\x2d42e8\x2d844c\x2d3aa53fc7af03-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 29 15:57:52.491750 systemd-resolved[1342]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 15:57:52.529816 containerd[1473]: time="2025-01-29T15:57:52.529770347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76cc8cbd88-xnqz2,Uid:b1f525e0-f76d-45a1-acf2-9b4a7a874df8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90\"" Jan 29 15:57:52.547570 containerd[1473]: time="2025-01-29T15:57:52.547454009Z" level=info msg="CreateContainer within sandbox \"9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 15:57:52.560801 containerd[1473]: time="2025-01-29T15:57:52.560647427Z" level=info msg="CreateContainer within sandbox \"9ffdff0881886b2cd36fb673d7a7c7889e00c67388592044c9add946a71a3c90\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c78e80daeb9d0e5e05000fc49e84bfed8d06ce6f7021a6728c39c175be81f550\"" Jan 29 15:57:52.561399 containerd[1473]: time="2025-01-29T15:57:52.561353129Z" level=info msg="StartContainer for \"c78e80daeb9d0e5e05000fc49e84bfed8d06ce6f7021a6728c39c175be81f550\"" Jan 29 15:57:52.589777 systemd[1]: Started cri-containerd-c78e80daeb9d0e5e05000fc49e84bfed8d06ce6f7021a6728c39c175be81f550.scope - libcontainer container c78e80daeb9d0e5e05000fc49e84bfed8d06ce6f7021a6728c39c175be81f550. Jan 29 15:57:52.623156 containerd[1473]: time="2025-01-29T15:57:52.622934495Z" level=info msg="StartContainer for \"c78e80daeb9d0e5e05000fc49e84bfed8d06ce6f7021a6728c39c175be81f550\" returns successfully" Jan 29 15:57:52.866776 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:36422.service - OpenSSH per-connection server daemon (10.0.0.1:36422). Jan 29 15:57:52.925963 sshd[6322]: Accepted publickey for core from 10.0.0.1 port 36422 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:52.928198 sshd-session[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:52.933637 systemd-logind[1458]: New session 21 of user core. Jan 29 15:57:52.939802 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 15:57:52.961275 kubelet[2572]: I0129 15:57:52.959904 2572 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76cc8cbd88-xnqz2" podStartSLOduration=1.959881651 podStartE2EDuration="1.959881651s" podCreationTimestamp="2025-01-29 15:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 15:57:52.959422342 +0000 UTC m=+67.603926562" watchObservedRunningTime="2025-01-29 15:57:52.959881651 +0000 UTC m=+67.604385911" Jan 29 15:57:53.177129 sshd[6331]: Connection closed by 10.0.0.1 port 36422 Jan 29 15:57:53.176970 sshd-session[6322]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:53.181315 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:36422.service: Deactivated successfully. Jan 29 15:57:53.184629 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 15:57:53.185574 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. 
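The podStartSLOduration reported in the pod_startup_latency_tracker entry above works out to exactly watchObservedRunningTime minus podCreationTimestamp (creation timestamps carry whole-second precision, which is why the value looks odd). Recomputing it from the two timestamps in that entry:

```go
package main

import (
	"fmt"
	"time"
)

// watchObservedRunningTime (15:57:52.959881651 UTC) minus
// podCreationTimestamp (15:57:51 UTC), both taken from the log entry above.
func main() {
	created := time.Date(2025, time.January, 29, 15, 57, 51, 0, time.UTC)
	observed := time.Date(2025, time.January, 29, 15, 57, 52, 959881651, time.UTC)
	fmt.Println(observed.Sub(created)) // 1.959881651s
}
```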
Jan 29 15:57:53.188300 systemd-logind[1458]: Removed session 21. Jan 29 15:57:53.435829 kubelet[2572]: I0129 15:57:53.435719 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b44de13-eff2-42e8-844c-3aa53fc7af03" path="/var/lib/kubelet/pods/6b44de13-eff2-42e8-844c-3aa53fc7af03/volumes" Jan 29 15:57:53.535746 systemd-networkd[1419]: cali738746e6baf: Gained IPv6LL Jan 29 15:57:55.325466 systemd[1]: cri-containerd-6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68.scope: Deactivated successfully. Jan 29 15:57:55.326190 systemd[1]: cri-containerd-6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68.scope: Consumed 349ms CPU time, 30.1M memory peak, 8.6M read from disk. Jan 29 15:57:55.343920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68-rootfs.mount: Deactivated successfully. Jan 29 15:57:55.345168 containerd[1473]: time="2025-01-29T15:57:55.345075904Z" level=info msg="shim disconnected" id=6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68 namespace=k8s.io Jan 29 15:57:55.345650 containerd[1473]: time="2025-01-29T15:57:55.345467294Z" level=warning msg="cleaning up after shim disconnected" id=6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68 namespace=k8s.io Jan 29 15:57:55.345650 containerd[1473]: time="2025-01-29T15:57:55.345489054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:55.370247 containerd[1473]: time="2025-01-29T15:57:55.370120954Z" level=info msg="StopContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" returns successfully" Jan 29 15:57:55.370663 containerd[1473]: time="2025-01-29T15:57:55.370617182Z" level=info msg="StopPodSandbox for \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\"" Jan 29 15:57:55.370663 containerd[1473]: time="2025-01-29T15:57:55.370651822Z" level=info msg="Container to stop \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 15:57:55.372705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655-shm.mount: Deactivated successfully. Jan 29 15:57:55.377512 systemd[1]: cri-containerd-0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655.scope: Deactivated successfully. Jan 29 15:57:55.395971 containerd[1473]: time="2025-01-29T15:57:55.395897547Z" level=info msg="shim disconnected" id=0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655 namespace=k8s.io Jan 29 15:57:55.395971 containerd[1473]: time="2025-01-29T15:57:55.395957186Z" level=warning msg="cleaning up after shim disconnected" id=0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655 namespace=k8s.io Jan 29 15:57:55.395971 containerd[1473]: time="2025-01-29T15:57:55.395967706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 15:57:55.397803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655-rootfs.mount: Deactivated successfully. 
Jan 29 15:57:55.412534 containerd[1473]: time="2025-01-29T15:57:55.412482437Z" level=info msg="TearDown network for sandbox \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\" successfully" Jan 29 15:57:55.412534 containerd[1473]: time="2025-01-29T15:57:55.412520276Z" level=info msg="StopPodSandbox for \"0e40509678f8c63f6710e45038e50afaa7c2718ab2e24c140ac73a5996d06655\" returns successfully" Jan 29 15:57:55.541676 kubelet[2572]: I0129 15:57:55.541639 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d7f57c-f89f-4df9-9825-c3b4adfbd250-tigera-ca-bundle\") pod \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " Jan 29 15:57:55.541676 kubelet[2572]: I0129 15:57:55.541684 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwhws\" (UniqueName: \"kubernetes.io/projected/05d7f57c-f89f-4df9-9825-c3b4adfbd250-kube-api-access-pwhws\") pod \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " Jan 29 15:57:55.542048 kubelet[2572]: I0129 15:57:55.541704 2572 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05d7f57c-f89f-4df9-9825-c3b4adfbd250-typha-certs\") pod \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\" (UID: \"05d7f57c-f89f-4df9-9825-c3b4adfbd250\") " Jan 29 15:57:55.546054 systemd[1]: var-lib-kubelet-pods-05d7f57c\x2df89f\x2d4df9\x2d9825\x2dc3b4adfbd250-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwhws.mount: Deactivated successfully. Jan 29 15:57:55.547013 kubelet[2572]: I0129 15:57:55.546955 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05d7f57c-f89f-4df9-9825-c3b4adfbd250-kube-api-access-pwhws" (OuterVolumeSpecName: "kube-api-access-pwhws") pod "05d7f57c-f89f-4df9-9825-c3b4adfbd250" (UID: "05d7f57c-f89f-4df9-9825-c3b4adfbd250"). InnerVolumeSpecName "kube-api-access-pwhws". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 29 15:57:55.549979 kubelet[2572]: I0129 15:57:55.549938 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05d7f57c-f89f-4df9-9825-c3b4adfbd250-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "05d7f57c-f89f-4df9-9825-c3b4adfbd250" (UID: "05d7f57c-f89f-4df9-9825-c3b4adfbd250"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 29 15:57:55.551537 kubelet[2572]: I0129 15:57:55.550941 2572 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05d7f57c-f89f-4df9-9825-c3b4adfbd250-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "05d7f57c-f89f-4df9-9825-c3b4adfbd250" (UID: "05d7f57c-f89f-4df9-9825-c3b4adfbd250"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 29 15:57:55.552161 systemd[1]: var-lib-kubelet-pods-05d7f57c\x2df89f\x2d4df9\x2d9825\x2dc3b4adfbd250-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 29 15:57:55.552268 systemd[1]: var-lib-kubelet-pods-05d7f57c\x2df89f\x2d4df9\x2d9825\x2dc3b4adfbd250-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jan 29 15:57:55.642312 kubelet[2572]: I0129 15:57:55.642198 2572 reconciler_common.go:299] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05d7f57c-f89f-4df9-9825-c3b4adfbd250-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 29 15:57:55.642312 kubelet[2572]: I0129 15:57:55.642230 2572 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwhws\" (UniqueName: \"kubernetes.io/projected/05d7f57c-f89f-4df9-9825-c3b4adfbd250-kube-api-access-pwhws\") on node \"localhost\" DevicePath \"\"" Jan 29 15:57:55.642312 kubelet[2572]: I0129 15:57:55.642242 2572 reconciler_common.go:299] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05d7f57c-f89f-4df9-9825-c3b4adfbd250-typha-certs\") on node \"localhost\" DevicePath \"\"" Jan 29 15:57:55.944064 kubelet[2572]: I0129 15:57:55.943966 2572 scope.go:117] "RemoveContainer" containerID="6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68" Jan 29 15:57:55.945855 containerd[1473]: time="2025-01-29T15:57:55.945633167Z" level=info msg="RemoveContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\"" Jan 29 15:57:55.949954 systemd[1]: Removed slice kubepods-besteffort-pod05d7f57c_f89f_4df9_9825_c3b4adfbd250.slice - libcontainer container kubepods-besteffort-pod05d7f57c_f89f_4df9_9825_c3b4adfbd250.slice. Jan 29 15:57:55.950047 systemd[1]: kubepods-besteffort-pod05d7f57c_f89f_4df9_9825_c3b4adfbd250.slice: Consumed 364ms CPU time, 30.3M memory peak, 8.6M read from disk. Jan 29 15:57:55.950509 containerd[1473]: time="2025-01-29T15:57:55.950402775Z" level=info msg="RemoveContainer for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" returns successfully" Jan 29 15:57:55.951340 kubelet[2572]: I0129 15:57:55.951284 2572 scope.go:117] "RemoveContainer" containerID="6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68" Jan 29 15:57:55.951849 containerd[1473]: time="2025-01-29T15:57:55.951779702Z" level=error msg="ContainerStatus for \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\": not found" Jan 29 15:57:55.952526 kubelet[2572]: E0129 15:57:55.952393 2572 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\": not found" containerID="6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68" Jan 29 15:57:55.952526 kubelet[2572]: I0129 15:57:55.952428 2572 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68"} err="failed to get container status \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\": rpc error: code = NotFound desc = an error occurred when try to find container \"6332360547e5b9d4e69ec6f8d23d468d632ea2f4358cb80a862ed364b692ac68\": not found" Jan 29 15:57:57.435720 kubelet[2572]: I0129 15:57:57.435683 2572 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d7f57c-f89f-4df9-9825-c3b4adfbd250" path="/var/lib/kubelet/pods/05d7f57c-f89f-4df9-9825-c3b4adfbd250/volumes" Jan 29 15:57:58.187735 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:36436.service - OpenSSH per-connection server daemon (10.0.0.1:36436). 
Jan 29 15:57:58.234067 sshd[6566]: Accepted publickey for core from 10.0.0.1 port 36436 ssh2: RSA SHA256:4mX/lzQU3D1dMBa7GZc3gSGUk2sKgMS88YYxAONzCDU Jan 29 15:57:58.235212 sshd-session[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 15:57:58.238863 systemd-logind[1458]: New session 22 of user core. Jan 29 15:57:58.253746 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 15:57:58.398022 sshd[6568]: Connection closed by 10.0.0.1 port 36436 Jan 29 15:57:58.398371 sshd-session[6566]: pam_unix(sshd:session): session closed for user core Jan 29 15:57:58.401624 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:36436.service: Deactivated successfully. Jan 29 15:57:58.403369 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 15:57:58.404049 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit. Jan 29 15:57:58.404861 systemd-logind[1458]: Removed session 22.