Jan 29 11:12:06.921603 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:12:06.921626 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:12:06.921636 kernel: KASLR enabled
Jan 29 11:12:06.921642 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:12:06.921647 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:12:06.921653 kernel: random: crng init done
Jan 29 11:12:06.921660 kernel: secureboot: Secure boot disabled
Jan 29 11:12:06.921666 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:12:06.921672 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:12:06.921679 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:12:06.921685 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921691 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921697 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921703 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921710 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921718 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921724 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921730 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921736 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:12:06.921742 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:12:06.921754 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:12:06.921762 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:12:06.921768 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:12:06.921774 kernel: Zone ranges:
Jan 29 11:12:06.921780 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:12:06.921788 kernel: DMA32 empty
Jan 29 11:12:06.921794 kernel: Normal empty
Jan 29 11:12:06.921800 kernel: Movable zone start for each node
Jan 29 11:12:06.921806 kernel: Early memory node ranges
Jan 29 11:12:06.921813 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:12:06.921819 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:12:06.921825 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:12:06.921831 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:12:06.921857 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:12:06.921864 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:12:06.921870 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:12:06.921876 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:12:06.921885 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:12:06.921891 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:12:06.921897 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:12:06.921906 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:12:06.921913 kernel: psci: Trusted OS migration not required
Jan 29 11:12:06.921919 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:12:06.921927 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:12:06.921934 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:12:06.921940 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:12:06.921947 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:12:06.921953 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:12:06.921960 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:12:06.921966 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:12:06.921973 kernel: CPU features: detected: Spectre-v4
Jan 29 11:12:06.921979 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:12:06.921986 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:12:06.921994 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:12:06.922000 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:12:06.922007 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:12:06.922014 kernel: alternatives: applying boot alternatives
Jan 29 11:12:06.922021 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:12:06.922029 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:12:06.922035 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:12:06.922042 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:12:06.922048 kernel: Fallback order for Node 0: 0
Jan 29 11:12:06.922055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:12:06.922061 kernel: Policy zone: DMA
Jan 29 11:12:06.922069 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:12:06.922075 kernel: software IO TLB: area num 4.
Jan 29 11:12:06.922082 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:12:06.922089 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 29 11:12:06.922096 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:12:06.922102 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:12:06.922109 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:12:06.922116 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:12:06.922122 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:12:06.922129 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:12:06.922136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:12:06.922142 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:12:06.922151 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:12:06.922158 kernel: GICv3: 256 SPIs implemented
Jan 29 11:12:06.922164 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:12:06.922171 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:12:06.922177 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:12:06.922184 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:12:06.922190 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:12:06.922197 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:12:06.922204 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:12:06.922210 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:12:06.922217 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:12:06.922225 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:12:06.922231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:12:06.922238 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:12:06.922245 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:12:06.922251 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:12:06.922258 kernel: arm-pv: using stolen time PV
Jan 29 11:12:06.922265 kernel: Console: colour dummy device 80x25
Jan 29 11:12:06.922272 kernel: ACPI: Core revision 20230628
Jan 29 11:12:06.922279 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:12:06.922286 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:12:06.922294 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:12:06.922301 kernel: landlock: Up and running.
Jan 29 11:12:06.922307 kernel: SELinux: Initializing.
Jan 29 11:12:06.922314 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:12:06.922321 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:12:06.922328 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:12:06.922335 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:12:06.922342 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:12:06.922349 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:12:06.922357 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:12:06.922364 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:12:06.922370 kernel: Remapping and enabling EFI services.
Jan 29 11:12:06.922377 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:12:06.922384 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:12:06.922390 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:12:06.922397 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:12:06.922461 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:12:06.922469 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:12:06.922476 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:12:06.922486 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:12:06.922493 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:12:06.922505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:12:06.922513 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:12:06.922520 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:12:06.922527 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:12:06.922535 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:12:06.922542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:12:06.922549 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:12:06.922557 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:12:06.922565 kernel: SMP: Total of 4 processors activated.
Jan 29 11:12:06.922572 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:12:06.922579 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:12:06.922587 kernel: CPU features: detected: Common not Private translations
Jan 29 11:12:06.922594 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:12:06.922601 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:12:06.922608 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:12:06.922616 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:12:06.922623 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:12:06.922631 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:12:06.922638 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:12:06.922645 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:12:06.922652 kernel: alternatives: applying system-wide alternatives
Jan 29 11:12:06.922659 kernel: devtmpfs: initialized
Jan 29 11:12:06.922667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:12:06.922674 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:12:06.922682 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:12:06.922689 kernel: SMBIOS 3.0.0 present.
Jan 29 11:12:06.922697 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:12:06.922704 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:12:06.922711 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:12:06.922718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:12:06.922725 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:12:06.922733 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:12:06.922740 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 29 11:12:06.922753 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:12:06.922761 kernel: cpuidle: using governor menu
Jan 29 11:12:06.922768 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:12:06.922775 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:12:06.922782 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:12:06.922789 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:12:06.922796 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:12:06.922804 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:12:06.922811 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:12:06.922820 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:12:06.922827 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:12:06.922834 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:12:06.922842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:12:06.922849 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:12:06.922856 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:12:06.922863 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:12:06.922870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:12:06.922877 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:12:06.922886 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:12:06.922893 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:12:06.922900 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:12:06.922907 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:12:06.922914 kernel: ACPI: Interpreter enabled
Jan 29 11:12:06.922921 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:12:06.922928 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:12:06.922935 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:12:06.922942 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:12:06.922951 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:12:06.923090 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:12:06.923161 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:12:06.923225 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:12:06.923287 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:12:06.923349 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:12:06.923358 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:12:06.923369 kernel: PCI host bridge to bus 0000:00
Jan 29 11:12:06.923459 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:12:06.923520 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:12:06.923576 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:12:06.923633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:12:06.923715 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:12:06.923800 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:12:06.923872 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:12:06.923936 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:12:06.924000 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:12:06.924065 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:12:06.924149 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:12:06.924215 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:12:06.924274 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:12:06.924333 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:12:06.924391 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:12:06.924400 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:12:06.924427 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:12:06.924435 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:12:06.924442 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:12:06.924449 kernel: iommu: Default domain type: Translated
Jan 29 11:12:06.924456 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:12:06.924466 kernel: efivars: Registered efivars operations
Jan 29 11:12:06.924473 kernel: vgaarb: loaded
Jan 29 11:12:06.924481 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:12:06.924488 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:12:06.924495 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:12:06.924503 kernel: pnp: PnP ACPI init
Jan 29 11:12:06.924576 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:12:06.924587 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:12:06.924597 kernel: NET: Registered PF_INET protocol family
Jan 29 11:12:06.924604 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:12:06.924611 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:12:06.924619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:12:06.924626 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:12:06.924633 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:12:06.924641 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:12:06.924648 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:12:06.924655 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:12:06.924664 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:12:06.924671 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:12:06.924678 kernel: kvm [1]: HYP mode not available
Jan 29 11:12:06.924685 kernel: Initialise system trusted keyrings
Jan 29 11:12:06.924693 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:12:06.924700 kernel: Key type asymmetric registered
Jan 29 11:12:06.924707 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:12:06.924714 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:12:06.924721 kernel: io scheduler mq-deadline registered
Jan 29 11:12:06.924730 kernel: io scheduler kyber registered
Jan 29 11:12:06.924737 kernel: io scheduler bfq registered
Jan 29 11:12:06.924744 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:12:06.924758 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:12:06.924766 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:12:06.924835 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:12:06.924845 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:12:06.924852 kernel: thunder_xcv, ver 1.0
Jan 29 11:12:06.924860 kernel: thunder_bgx, ver 1.0
Jan 29 11:12:06.924869 kernel: nicpf, ver 1.0
Jan 29 11:12:06.924876 kernel: nicvf, ver 1.0
Jan 29 11:12:06.924949 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:12:06.925012 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:12:06 UTC (1738149126)
Jan 29 11:12:06.925021 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:12:06.925029 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:12:06.925036 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:12:06.925043 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:12:06.925053 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:12:06.925060 kernel: Segment Routing with IPv6
Jan 29 11:12:06.925067 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:12:06.925075 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:12:06.925082 kernel: Key type dns_resolver registered
Jan 29 11:12:06.925089 kernel: registered taskstats version 1
Jan 29 11:12:06.925096 kernel: Loading compiled-in X.509 certificates
Jan 29 11:12:06.925104 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:12:06.925111 kernel: Key type .fscrypt registered
Jan 29 11:12:06.925119 kernel: Key type fscrypt-provisioning registered
Jan 29 11:12:06.925127 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:12:06.925134 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:12:06.925141 kernel: ima: No architecture policies found
Jan 29 11:12:06.925148 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:12:06.925155 kernel: clk: Disabling unused clocks
Jan 29 11:12:06.925163 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:12:06.925170 kernel: Run /init as init process
Jan 29 11:12:06.925177 kernel: with arguments:
Jan 29 11:12:06.925186 kernel: /init
Jan 29 11:12:06.925193 kernel: with environment:
Jan 29 11:12:06.925200 kernel: HOME=/
Jan 29 11:12:06.925207 kernel: TERM=linux
Jan 29 11:12:06.925214 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:12:06.925224 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:12:06.925233 systemd[1]: Detected virtualization kvm.
Jan 29 11:12:06.925241 systemd[1]: Detected architecture arm64.
Jan 29 11:12:06.925249 systemd[1]: Running in initrd.
Jan 29 11:12:06.925257 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:12:06.925264 systemd[1]: Hostname set to .
Jan 29 11:12:06.925272 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:12:06.925280 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:12:06.925287 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:12:06.925295 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:12:06.925303 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:12:06.925313 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:12:06.925321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:12:06.925328 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:12:06.925338 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:12:06.925345 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:12:06.925353 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:12:06.925361 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:12:06.925370 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:12:06.925378 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:12:06.925386 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:12:06.925393 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:12:06.925401 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:12:06.925420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:12:06.925428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:12:06.925436 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:12:06.925446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:12:06.925454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:12:06.925462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:12:06.925469 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:12:06.925477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:12:06.925485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:12:06.925492 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:12:06.925500 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:12:06.925508 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:12:06.925517 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:12:06.925525 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:12:06.925533 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:12:06.925541 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:12:06.925548 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:12:06.925557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:12:06.925567 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:12:06.925593 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 11:12:06.925614 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:12:06.925623 systemd-journald[239]: Journal started
Jan 29 11:12:06.925641 systemd-journald[239]: Runtime Journal (/run/log/journal/b6fa76d5d2c5431c89cdf6cd06fd3c92) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:12:06.936596 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:12:06.936641 kernel: Bridge firewalling registered
Jan 29 11:12:06.916183 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 11:12:06.931986 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 11:12:06.939543 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:12:06.941481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:12:06.942891 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:12:06.944177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:12:06.948055 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:12:06.950051 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:12:06.952465 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:12:06.960330 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:12:06.962170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:12:06.963337 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:12:06.973538 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:12:06.975424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:12:06.985143 dracut-cmdline[278]: dracut-dracut-053
Jan 29 11:12:06.987702 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:12:07.002401 systemd-resolved[281]: Positive Trust Anchors:
Jan 29 11:12:07.002504 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:12:07.002536 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:12:07.007422 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jan 29 11:12:07.010366 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:12:07.011261 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:12:07.060417 kernel: SCSI subsystem initialized
Jan 29 11:12:07.063432 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:12:07.071445 kernel: iscsi: registered transport (tcp)
Jan 29 11:12:07.084436 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:12:07.084466 kernel: QLogic iSCSI HBA Driver
Jan 29 11:12:07.128496 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:12:07.142641 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:12:07.160073 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:12:07.160132 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:12:07.160163 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:12:07.207431 kernel: raid6: neonx8 gen() 15673 MB/s
Jan 29 11:12:07.224422 kernel: raid6: neonx4 gen() 15619 MB/s
Jan 29 11:12:07.241419 kernel: raid6: neonx2 gen() 13108 MB/s
Jan 29 11:12:07.258417 kernel: raid6: neonx1 gen() 10464 MB/s
Jan 29 11:12:07.275416 kernel: raid6: int64x8 gen() 6937 MB/s
Jan 29 11:12:07.292421 kernel: raid6: int64x4 gen() 7328 MB/s
Jan 29 11:12:07.309428 kernel: raid6: int64x2 gen() 6105 MB/s
Jan 29 11:12:07.326434 kernel: raid6: int64x1 gen() 5053 MB/s
Jan 29 11:12:07.326472 kernel: raid6: using algorithm neonx8 gen() 15673 MB/s
Jan 29 11:12:07.343429 kernel: raid6: .... xor() 11924 MB/s, rmw enabled
Jan 29 11:12:07.343445 kernel: raid6: using neon recovery algorithm
Jan 29 11:12:07.348421 kernel: xor: measuring software checksum speed
Jan 29 11:12:07.348436 kernel: 8regs : 19745 MB/sec
Jan 29 11:12:07.349421 kernel: 32regs : 18114 MB/sec
Jan 29 11:12:07.349432 kernel: arm64_neon : 27087 MB/sec
Jan 29 11:12:07.349441 kernel: xor: using function: arm64_neon (27087 MB/sec)
Jan 29 11:12:07.401441 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:12:07.412458 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:12:07.427594 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:12:07.438941 systemd-udevd[465]: Using default interface naming scheme 'v255'.
Jan 29 11:12:07.442082 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:12:07.453668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:12:07.465182 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jan 29 11:12:07.493360 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:12:07.505649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:12:07.546452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:12:07.553606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:12:07.563914 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:12:07.565239 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:12:07.567862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:12:07.568768 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:12:07.579589 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:12:07.589660 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:12:07.593468 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:12:07.606633 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:12:07.606738 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:12:07.606756 kernel: GPT:9289727 != 19775487
Jan 29 11:12:07.606767 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:12:07.606776 kernel: GPT:9289727 != 19775487
Jan 29 11:12:07.606787 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:12:07.606797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:12:07.605600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:12:07.605714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:12:07.610686 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:12:07.612352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:12:07.612515 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:12:07.614379 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:12:07.622431 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (526)
Jan 29 11:12:07.626042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:12:07.628640 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (511)
Jan 29 11:12:07.638081 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:12:07.639253 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:12:07.644577 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:12:07.653356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:12:07.654695 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:12:07.659642 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:12:07.670586 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:12:07.672681 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:12:07.676835 disk-uuid[553]: Primary Header is updated.
Jan 29 11:12:07.676835 disk-uuid[553]: Secondary Entries is updated.
Jan 29 11:12:07.676835 disk-uuid[553]: Secondary Header is updated.
Jan 29 11:12:07.682510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:12:07.686441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:12:07.694212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:12:08.688513 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:12:08.689986 disk-uuid[554]: The operation has completed successfully.
Jan 29 11:12:08.711621 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:12:08.712574 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:12:08.731618 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:12:08.734465 sh[573]: Success
Jan 29 11:12:08.748436 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:12:08.783827 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:12:08.785691 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:12:08.786605 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:12:08.796926 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:12:08.796974 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:12:08.796986 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:12:08.797693 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:12:08.798711 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:12:08.801964 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:12:08.803116 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:12:08.817560 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:12:08.818931 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:12:08.826758 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:12:08.826805 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:12:08.826816 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:12:08.828442 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:12:08.835950 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:12:08.837431 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:12:08.842632 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:12:08.849611 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:12:08.915161 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:12:08.929572 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:12:08.953852 ignition[662]: Ignition 2.20.0
Jan 29 11:12:08.953862 ignition[662]: Stage: fetch-offline
Jan 29 11:12:08.953896 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:08.953904 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:08.954055 ignition[662]: parsed url from cmdline: ""
Jan 29 11:12:08.954059 ignition[662]: no config URL provided
Jan 29 11:12:08.954063 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:12:08.954070 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:12:08.954097 ignition[662]: op(1): [started] loading QEMU firmware config module
Jan 29 11:12:08.954102 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:12:08.959680 systemd-networkd[766]: lo: Link UP
Jan 29 11:12:08.959683 systemd-networkd[766]: lo: Gained carrier
Jan 29 11:12:08.960445 systemd-networkd[766]: Enumeration completed
Jan 29 11:12:08.960747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:12:08.960886 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:12:08.960889 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:12:08.961623 systemd-networkd[766]: eth0: Link UP
Jan 29 11:12:08.961626 systemd-networkd[766]: eth0: Gained carrier
Jan 29 11:12:08.967380 ignition[662]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:12:08.961634 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:12:08.962137 systemd[1]: Reached target network.target - Network.
Jan 29 11:12:08.987475 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:12:09.007372 ignition[662]: parsing config with SHA512: d3e3de1955427c72a30c8c8aafae896a07de91ff10159a48b30b54406dde777049192e0e5302820985a8e76d3a984adce917f249556c4d62394ed389cf3f7bea
Jan 29 11:12:09.013764 unknown[662]: fetched base config from "system"
Jan 29 11:12:09.013774 unknown[662]: fetched user config from "qemu"
Jan 29 11:12:09.014235 ignition[662]: fetch-offline: fetch-offline passed
Jan 29 11:12:09.016253 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:12:09.014315 ignition[662]: Ignition finished successfully
Jan 29 11:12:09.017255 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:12:09.028601 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:12:09.039071 ignition[772]: Ignition 2.20.0
Jan 29 11:12:09.039082 ignition[772]: Stage: kargs
Jan 29 11:12:09.039251 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:09.039261 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:09.040177 ignition[772]: kargs: kargs passed
Jan 29 11:12:09.040225 ignition[772]: Ignition finished successfully
Jan 29 11:12:09.042232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:12:09.056666 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:12:09.066109 ignition[782]: Ignition 2.20.0
Jan 29 11:12:09.066118 ignition[782]: Stage: disks
Jan 29 11:12:09.066282 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:09.066292 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:09.069302 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:12:09.067220 ignition[782]: disks: disks passed
Jan 29 11:12:09.070257 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:12:09.067267 ignition[782]: Ignition finished successfully
Jan 29 11:12:09.071385 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:12:09.072617 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:12:09.073961 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:12:09.075102 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:12:09.090628 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:12:09.099430 systemd-resolved[281]: Detected conflict on linux IN A 10.0.0.115
Jan 29 11:12:09.099445 systemd-resolved[281]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Jan 29 11:12:09.101725 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:12:09.105910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:12:09.107857 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:12:09.151424 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:12:09.152146 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:12:09.153189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:12:09.162496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:12:09.164333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:12:09.165148 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:12:09.165184 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:12:09.165203 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:12:09.170076 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:12:09.171550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:12:09.175204 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Jan 29 11:12:09.175236 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:12:09.176019 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:12:09.176056 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:12:09.180417 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:12:09.183244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:12:09.217190 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:12:09.221043 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:12:09.225476 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:12:09.229570 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:12:09.308240 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:12:09.322567 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:12:09.324034 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:12:09.328427 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:12:09.347289 ignition[914]: INFO : Ignition 2.20.0
Jan 29 11:12:09.347289 ignition[914]: INFO : Stage: mount
Jan 29 11:12:09.348745 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:09.348745 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:09.348477 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:12:09.352199 ignition[914]: INFO : mount: mount passed
Jan 29 11:12:09.352199 ignition[914]: INFO : Ignition finished successfully
Jan 29 11:12:09.350612 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:12:09.360585 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:12:09.796386 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:12:09.811609 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:12:09.817853 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Jan 29 11:12:09.817892 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:12:09.818562 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:12:09.818584 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:12:09.821428 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:12:09.822016 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:12:09.838075 ignition[944]: INFO : Ignition 2.20.0
Jan 29 11:12:09.838075 ignition[944]: INFO : Stage: files
Jan 29 11:12:09.839531 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:09.839531 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:09.839531 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:12:09.842685 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:12:09.842685 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:12:09.842685 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:12:09.842685 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:12:09.842685 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:12:09.842151 unknown[944]: wrote ssh authorized keys file for user: core
Jan 29 11:12:09.849106 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:12:09.849106 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 11:12:09.849106 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:12:09.849106 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 29 11:12:09.897526 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 11:12:10.032656 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 29 11:12:10.032656 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:12:10.036255 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 11:12:10.359683 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 11:12:10.527418 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:12:10.527418 ignition[944]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 29 11:12:10.530126 ignition[944]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:12:10.551935 ignition[944]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:12:10.556153 ignition[944]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:12:10.558172 ignition[944]: INFO : files: files passed
Jan 29 11:12:10.558172 ignition[944]: INFO : Ignition finished successfully
Jan 29 11:12:10.559880 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:12:10.568635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:12:10.570951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:12:10.572807 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:12:10.572901 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:12:10.578128 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:12:10.581287 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:12:10.581287 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:12:10.583785 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:12:10.585981 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:12:10.587084 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:12:10.595623 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:12:10.617352 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:12:10.617488 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:12:10.619153 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:12:10.620471 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:12:10.621827 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:12:10.622640 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:12:10.638910 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:12:10.649585 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:12:10.658544 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:12:10.659475 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:12:10.661223 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:12:10.662621 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:12:10.662743 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:12:10.664542 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:12:10.666036 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:12:10.667251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:12:10.668481 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:12:10.670022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:12:10.671445 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:12:10.673175 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:12:10.674578 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:12:10.675992 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:12:10.677221 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:12:10.678427 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:12:10.678551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:12:10.680258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:12:10.681682 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:12:10.683122 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:12:10.687487 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:12:10.689313 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:12:10.689465 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:12:10.691495 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:12:10.691609 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:12:10.693183 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:12:10.694346 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:12:10.698501 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:12:10.699510 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:12:10.701098 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:12:10.702276 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:12:10.702371 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:12:10.703541 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:12:10.703622 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:12:10.704729 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:12:10.704847 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:12:10.706206 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:12:10.706306 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:12:10.722600 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:12:10.724113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:12:10.724789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:12:10.724907 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:12:10.726557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:12:10.726699 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:12:10.732933 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:12:10.733026 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:12:10.738846 ignition[1000]: INFO : Ignition 2.20.0
Jan 29 11:12:10.738846 ignition[1000]: INFO : Stage: umount
Jan 29 11:12:10.738846 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:12:10.738846 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:12:10.738846 ignition[1000]: INFO : umount: umount passed
Jan 29 11:12:10.738846 ignition[1000]: INFO : Ignition finished successfully
Jan 29 11:12:10.737510 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:12:10.741022 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:12:10.741124 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:12:10.742883 systemd[1]: Stopped target network.target - Network.
Jan 29 11:12:10.743821 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:12:10.743880 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:12:10.745282 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:12:10.745323 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:12:10.747214 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:12:10.747271 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:12:10.748572 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:12:10.748612 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:12:10.750211 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:12:10.752486 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:12:10.762517 systemd-networkd[766]: eth0: DHCPv6 lease lost
Jan 29 11:12:10.764112 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:12:10.764265 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:12:10.766096 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:12:10.766233 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:12:10.768621 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:12:10.768664 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:12:10.776552 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:12:10.777229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:12:10.777291 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:12:10.778737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:12:10.778784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:12:10.780128 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:12:10.780171 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:12:10.783193 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:12:10.783231 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:12:10.788898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:12:10.807784 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:12:10.807955 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:12:10.809873 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:12:10.809987 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:12:10.811236 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:12:10.812435 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:12:10.814494 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:12:10.814567 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:12:10.816092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:12:10.816121 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:12:10.817395 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:12:10.817468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:12:10.819505 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:12:10.819547 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:12:10.821414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:12:10.821457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:12:10.823502 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:12:10.823570 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:12:10.832585 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:12:10.833627 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:12:10.833696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:12:10.835243 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:12:10.835284 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:12:10.836725 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 29 11:12:10.836775 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:12:10.838310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:12:10.838346 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:12:10.843162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:12:10.844484 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:12:10.845872 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:12:10.848018 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:12:10.859132 systemd[1]: Switching root. Jan 29 11:12:10.879066 systemd-journald[239]: Journal stopped Jan 29 11:12:11.603712 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 29 11:12:11.603778 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:12:11.603791 kernel: SELinux: policy capability open_perms=1 Jan 29 11:12:11.603800 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:12:11.603811 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:12:11.603821 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:12:11.603830 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:12:11.603844 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:12:11.603854 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:12:11.603864 kernel: audit: type=1403 audit(1738149131.066:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:12:11.603874 systemd[1]: Successfully loaded SELinux policy in 29.677ms. Jan 29 11:12:11.603894 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.039ms. 
Jan 29 11:12:11.603905 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:12:11.603916 systemd[1]: Detected virtualization kvm. Jan 29 11:12:11.603926 systemd[1]: Detected architecture arm64. Jan 29 11:12:11.603936 systemd[1]: Detected first boot. Jan 29 11:12:11.603949 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:12:11.603960 zram_generator::config[1064]: No configuration found. Jan 29 11:12:11.603972 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:12:11.603982 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:12:11.603992 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:12:11.604003 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:12:11.604014 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:12:11.604024 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:12:11.604036 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:12:11.604047 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:12:11.604057 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:12:11.604068 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:12:11.604077 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:12:11.604088 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 29 11:12:11.604098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:12:11.604109 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:12:11.604119 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:12:11.604131 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:12:11.604141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:12:11.604151 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:12:11.604161 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:12:11.604171 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:12:11.604181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:12:11.604191 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:12:11.604201 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:12:11.604213 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:12:11.604223 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:12:11.604233 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:12:11.604243 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:12:11.604254 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:12:11.604265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:12:11.604275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:12:11.604285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 11:12:11.604295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:12:11.604307 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:12:11.604317 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:12:11.604328 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:12:11.604338 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:12:11.604348 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:12:11.604358 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:12:11.604369 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:12:11.604379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:12:11.604389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:12:11.604401 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:12:11.604428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:12:11.604439 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:12:11.604449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:12:11.604459 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:12:11.604469 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:12:11.604479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:12:11.604490 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 29 11:12:11.604503 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 11:12:11.604513 kernel: fuse: init (API version 7.39) Jan 29 11:12:11.604522 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:12:11.604532 kernel: loop: module loaded Jan 29 11:12:11.604542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:12:11.604552 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:12:11.604562 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:12:11.604573 kernel: ACPI: bus type drm_connector registered Jan 29 11:12:11.604583 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:12:11.604611 systemd-journald[1142]: Collecting audit messages is disabled. Jan 29 11:12:11.604638 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:12:11.604649 systemd-journald[1142]: Journal started Jan 29 11:12:11.604670 systemd-journald[1142]: Runtime Journal (/run/log/journal/b6fa76d5d2c5431c89cdf6cd06fd3c92) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:12:11.608890 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:12:11.607840 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:12:11.609164 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:12:11.610093 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:12:11.611250 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:12:11.612852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:12:11.614059 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:12:11.615295 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 29 11:12:11.615493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:12:11.616733 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:12:11.617879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:12:11.618023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:12:11.619338 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:12:11.619768 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:12:11.620784 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:12:11.620943 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:12:11.622087 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:12:11.622242 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:12:11.623348 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:12:11.623831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:12:11.625025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:12:11.626577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:12:11.627837 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:12:11.639150 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:12:11.645537 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:12:11.647777 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:12:11.648851 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Jan 29 11:12:11.653399 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:12:11.655585 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:12:11.658385 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:12:11.659614 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:12:11.660568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:12:11.664246 systemd-journald[1142]: Time spent on flushing to /var/log/journal/b6fa76d5d2c5431c89cdf6cd06fd3c92 is 19.852ms for 846 entries. Jan 29 11:12:11.664246 systemd-journald[1142]: System Journal (/var/log/journal/b6fa76d5d2c5431c89cdf6cd06fd3c92) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:12:11.688893 systemd-journald[1142]: Received client request to flush runtime journal. Jan 29 11:12:11.664595 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:12:11.667515 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:12:11.669940 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:12:11.671043 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:12:11.671976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:12:11.673209 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:12:11.676125 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:12:11.679380 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 29 11:12:11.689906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:12:11.691305 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:12:11.696493 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:12:11.698888 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jan 29 11:12:11.698906 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Jan 29 11:12:11.703101 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:12:11.712662 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:12:11.729750 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:12:11.746568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:12:11.757876 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jan 29 11:12:11.757894 systemd-tmpfiles[1219]: ACLs are not supported, ignoring. Jan 29 11:12:11.761366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:12:12.072706 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:12:12.083642 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:12:12.108538 systemd-udevd[1225]: Using default interface naming scheme 'v255'. Jan 29 11:12:12.122939 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:12:12.143593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:12:12.150858 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Jan 29 11:12:12.167427 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1238) Jan 29 11:12:12.168612 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:12:12.215235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:12:12.216314 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:12:12.250608 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:12:12.261327 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:12:12.269547 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:12:12.271798 systemd-networkd[1231]: lo: Link UP Jan 29 11:12:12.271808 systemd-networkd[1231]: lo: Gained carrier Jan 29 11:12:12.272521 systemd-networkd[1231]: Enumeration completed Jan 29 11:12:12.272622 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:12:12.273459 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:12:12.273468 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:12:12.273983 systemd-networkd[1231]: eth0: Link UP Jan 29 11:12:12.273992 systemd-networkd[1231]: eth0: Gained carrier Jan 29 11:12:12.274004 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:12:12.275276 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:12:12.287198 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 29 11:12:12.296470 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:12:12.297328 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:12:12.308317 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:12:12.310042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:12:12.320583 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:12:12.323765 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:12:12.350591 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:12:12.351704 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:12:12.352703 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:12:12.352736 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:12:12.353510 systemd[1]: Reached target machines.target - Containers. Jan 29 11:12:12.355151 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:12:12.370589 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:12:12.372685 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:12:12.373549 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:12:12.374534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:12:12.376731 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Jan 29 11:12:12.379650 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:12:12.381145 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:12:12.393539 kernel: loop0: detected capacity change from 0 to 116808 Jan 29 11:12:12.398444 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:12:12.403590 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:12:12.406127 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:12:12.406820 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:12:12.441432 kernel: loop1: detected capacity change from 0 to 194096 Jan 29 11:12:12.485445 kernel: loop2: detected capacity change from 0 to 113536 Jan 29 11:12:12.537432 kernel: loop3: detected capacity change from 0 to 116808 Jan 29 11:12:12.542459 kernel: loop4: detected capacity change from 0 to 194096 Jan 29 11:12:12.550431 kernel: loop5: detected capacity change from 0 to 113536 Jan 29 11:12:12.555542 (sd-merge)[1291]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:12:12.556068 (sd-merge)[1291]: Merged extensions into '/usr'. Jan 29 11:12:12.559252 systemd[1]: Reloading requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:12:12.559272 systemd[1]: Reloading... Jan 29 11:12:12.604469 zram_generator::config[1320]: No configuration found. Jan 29 11:12:12.637975 ldconfig[1275]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:12:12.698722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 29 11:12:12.740832 systemd[1]: Reloading finished in 181 ms. Jan 29 11:12:12.757114 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:12:12.758360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:12:12.771543 systemd[1]: Starting ensure-sysext.service... Jan 29 11:12:12.773319 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:12:12.778307 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:12:12.778326 systemd[1]: Reloading... Jan 29 11:12:12.788544 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:12:12.788817 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:12:12.789433 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:12:12.789640 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 29 11:12:12.789695 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 29 11:12:12.792003 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:12:12.792018 systemd-tmpfiles[1361]: Skipping /boot Jan 29 11:12:12.798665 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:12:12.798681 systemd-tmpfiles[1361]: Skipping /boot Jan 29 11:12:12.823435 zram_generator::config[1390]: No configuration found. Jan 29 11:12:12.910420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:12:12.954023 systemd[1]: Reloading finished in 175 ms. 
Jan 29 11:12:12.970283 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:12:12.994499 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:12:12.996670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:12:12.999183 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:12:13.003001 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:12:13.005635 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:12:13.011863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:12:13.015670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:12:13.018658 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:12:13.021558 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:12:13.023493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:12:13.026894 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:12:13.027035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:12:13.031483 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:12:13.031661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:12:13.034635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:12:13.039754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:12:13.043468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 11:12:13.047554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:12:13.048664 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:12:13.050104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:12:13.050242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:12:13.051962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:12:13.057878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:12:13.059977 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:12:13.062600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:12:13.064156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:12:13.069102 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:12:13.073623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:12:13.085278 augenrules[1480]: No rules Jan 29 11:12:13.088653 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:12:13.091535 systemd-resolved[1436]: Positive Trust Anchors: Jan 29 11:12:13.091592 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 29 11:12:13.091651 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:12:13.091684 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:12:13.093332 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:12:13.099584 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:12:13.100559 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:12:13.100948 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jan 29 11:12:13.102098 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:12:13.103108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:12:13.104035 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:12:13.105213 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:12:13.105518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:12:13.106738 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:12:13.106889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:12:13.108087 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 11:12:13.108216 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:12:13.109361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:12:13.109521 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:12:13.110852 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:12:13.111050 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:12:13.113157 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:12:13.116171 systemd[1]: Finished ensure-sysext.service. Jan 29 11:12:13.120803 systemd[1]: Reached target network.target - Network. Jan 29 11:12:13.121670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:12:13.122503 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:12:13.122564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:12:13.129548 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:12:13.170626 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:12:13.171790 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:12:13.171837 systemd-timesyncd[1504]: Initial clock synchronization to Wed 2025-01-29 11:12:12.979647 UTC. Jan 29 11:12:13.171933 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:12:13.172779 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:12:13.173669 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jan 29 11:12:13.174551 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:12:13.175430 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:12:13.175457 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:12:13.176089 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:12:13.176946 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:12:13.177843 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:12:13.178739 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:12:13.180058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:12:13.182337 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:12:13.184137 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:12:13.194341 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:12:13.195184 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:12:13.195927 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:12:13.196731 systemd[1]: System is tainted: cgroupsv1 Jan 29 11:12:13.196785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:12:13.196805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:12:13.198035 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:12:13.199946 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:12:13.202550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:12:13.205347 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 29 11:12:13.206326 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:12:13.208916 jq[1510]: false Jan 29 11:12:13.210544 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:12:13.215690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:12:13.219470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:12:13.223964 extend-filesystems[1512]: Found loop3 Jan 29 11:12:13.223964 extend-filesystems[1512]: Found loop4 Jan 29 11:12:13.223964 extend-filesystems[1512]: Found loop5 Jan 29 11:12:13.223964 extend-filesystems[1512]: Found vda Jan 29 11:12:13.223964 extend-filesystems[1512]: Found vda1 Jan 29 11:12:13.223964 extend-filesystems[1512]: Found vda2 Jan 29 11:12:13.223964 extend-filesystems[1512]: Found vda3 Jan 29 11:12:13.247027 extend-filesystems[1512]: Found usr Jan 29 11:12:13.247027 extend-filesystems[1512]: Found vda4 Jan 29 11:12:13.247027 extend-filesystems[1512]: Found vda6 Jan 29 11:12:13.247027 extend-filesystems[1512]: Found vda7 Jan 29 11:12:13.247027 extend-filesystems[1512]: Found vda9 Jan 29 11:12:13.247027 extend-filesystems[1512]: Checking size of /dev/vda9 Jan 29 11:12:13.247027 extend-filesystems[1512]: Resized partition /dev/vda9 Jan 29 11:12:13.259557 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1238) Jan 29 11:12:13.259603 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:12:13.224840 dbus-daemon[1509]: [system] SELinux support is enabled Jan 29 11:12:13.227520 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:12:13.259935 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:12:13.233955 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 29 11:12:13.237340 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:12:13.238693 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:12:13.241553 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:12:13.263152 jq[1533]: true Jan 29 11:12:13.243188 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:12:13.250742 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:12:13.250983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:12:13.251257 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:12:13.251464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:12:13.257983 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:12:13.259512 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:12:13.277585 (ntainerd)[1547]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:12:13.283790 jq[1542]: true Jan 29 11:12:13.304438 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:12:13.304476 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:12:13.308182 tar[1540]: linux-arm64/helm Jan 29 11:12:13.305835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 29 11:12:13.308513 update_engine[1532]: I20250129 11:12:13.307348 1532 main.cc:92] Flatcar Update Engine starting Jan 29 11:12:13.305851 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:12:13.314155 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:12:13.326320 update_engine[1532]: I20250129 11:12:13.314258 1532 update_check_scheduler.cc:74] Next update check in 10m15s Jan 29 11:12:13.314478 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:12:13.315944 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:12:13.322543 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:12:13.326921 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:12:13.326921 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:12:13.326921 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:12:13.334292 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jan 29 11:12:13.328795 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:12:13.329693 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:12:13.329929 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:12:13.330313 systemd-logind[1528]: New seat seat0. Jan 29 11:12:13.335577 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:12:13.380895 bash[1570]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:12:13.384907 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:12:13.386849 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
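The extend-filesystems entries above record an online ext4 grow of /dev/vda9 from 553472 to 1864699 blocks. A quick arithmetic sketch, not part of the log itself, converting those counts to byte sizes (block size taken from the "(4k) blocks" note in the resize2fs output):

```python
# Convert the ext4 block counts from the resize log into approximate sizes.
BLOCK = 4096  # 4 KiB blocks, per the "(4k) blocks" note logged by resize2fs

old_blocks, new_blocks = 553472, 1864699
old_gib = old_blocks * BLOCK / 2**30
new_gib = new_blocks * BLOCK / 2**30

# The root filesystem grew from roughly 2.11 GiB to roughly 7.11 GiB.
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
```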
Jan 29 11:12:13.391808 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:12:13.442316 systemd-networkd[1231]: eth0: Gained IPv6LL Jan 29 11:12:13.449763 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:12:13.456239 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:12:13.473113 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:12:13.476608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:13.479738 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:12:13.506090 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:12:13.506325 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:12:13.509887 containerd[1547]: time="2025-01-29T11:12:13.509806760Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:12:13.513525 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:12:13.524147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:12:13.545586 containerd[1547]: time="2025-01-29T11:12:13.545537000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.546952 containerd[1547]: time="2025-01-29T11:12:13.546912640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:13.546952 containerd[1547]: time="2025-01-29T11:12:13.546948640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 29 11:12:13.547044 containerd[1547]: time="2025-01-29T11:12:13.546967120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:12:13.547144 containerd[1547]: time="2025-01-29T11:12:13.547121840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:12:13.547176 containerd[1547]: time="2025-01-29T11:12:13.547144360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547221 containerd[1547]: time="2025-01-29T11:12:13.547201600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547221 containerd[1547]: time="2025-01-29T11:12:13.547219280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547434160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547453920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547466440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547475920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547551240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547738 containerd[1547]: time="2025-01-29T11:12:13.547729280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547889 containerd[1547]: time="2025-01-29T11:12:13.547863440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:12:13.547889 containerd[1547]: time="2025-01-29T11:12:13.547878840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:12:13.547974 containerd[1547]: time="2025-01-29T11:12:13.547952800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:12:13.548018 containerd[1547]: time="2025-01-29T11:12:13.548003600Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:12:13.551629 containerd[1547]: time="2025-01-29T11:12:13.551600800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:12:13.551682 containerd[1547]: time="2025-01-29T11:12:13.551647880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:12:13.551682 containerd[1547]: time="2025-01-29T11:12:13.551663360Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:12:13.551682 containerd[1547]: time="2025-01-29T11:12:13.551680240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 29 11:12:13.551774 containerd[1547]: time="2025-01-29T11:12:13.551694280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.551829880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552151360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552253080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552268240Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552282160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552296360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552309080Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552324720Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552338640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552352200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552364960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552376400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552387680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:12:13.552529 containerd[1547]: time="2025-01-29T11:12:13.552440120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552456080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552467880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552480880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552492840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552506360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552517920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552531160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552543960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552558800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552572400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552587080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552599600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552614080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552634040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.552830 containerd[1547]: time="2025-01-29T11:12:13.552654640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.552665600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.552958600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.552978920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.552989040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.553000520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.553010000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.553021920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.553031200Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:12:13.553084 containerd[1547]: time="2025-01-29T11:12:13.553042160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:12:13.553528 containerd[1547]: time="2025-01-29T11:12:13.553443800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:12:13.553528 containerd[1547]: time="2025-01-29T11:12:13.553496040Z" level=info msg="Connect containerd service" Jan 29 11:12:13.554069 containerd[1547]: time="2025-01-29T11:12:13.553532600Z" level=info msg="using legacy CRI server" Jan 29 11:12:13.554069 containerd[1547]: time="2025-01-29T11:12:13.553539480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:12:13.554069 containerd[1547]: time="2025-01-29T11:12:13.553883480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:12:13.554670 containerd[1547]: time="2025-01-29T11:12:13.554639040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555292600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555366120Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555454040Z" level=info msg="Start subscribing containerd event" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555516360Z" level=info msg="Start recovering state" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555582600Z" level=info msg="Start event monitor" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555595120Z" level=info msg="Start snapshots syncer" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555604120Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:12:13.555766 containerd[1547]: time="2025-01-29T11:12:13.555611200Z" level=info msg="Start streaming server" Jan 29 11:12:13.555950 containerd[1547]: time="2025-01-29T11:12:13.555807720Z" level=info msg="containerd successfully booted in 0.049013s" Jan 29 11:12:13.555928 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:12:13.691074 tar[1540]: linux-arm64/LICENSE Jan 29 11:12:13.691200 tar[1540]: linux-arm64/README.md Jan 29 11:12:13.714126 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:12:13.972598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:13.975678 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:12:14.247899 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:12:14.267079 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:12:14.279764 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:12:14.286423 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:12:14.286694 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 29 11:12:14.290263 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:12:14.302846 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:12:14.306034 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:12:14.308342 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:12:14.309814 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:12:14.310865 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:12:14.311927 systemd[1]: Startup finished in 4.920s (kernel) + 3.277s (userspace) = 8.197s. Jan 29 11:12:14.492852 kubelet[1627]: E0129 11:12:14.492795 1627 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:12:14.495626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:12:14.495827 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:12:19.243733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:12:19.255680 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:36794.service - OpenSSH per-connection server daemon (10.0.0.1:36794). Jan 29 11:12:19.322864 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 36794 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:12:19.324510 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:12:19.332947 systemd-logind[1528]: New session 1 of user core. Jan 29 11:12:19.333959 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:12:19.340716 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
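The "Startup finished" entry above reports 4.920s (kernel) + 3.277s (userspace) = 8.197s. A trivial check of that sum, shown only to make the reported breakdown explicit:

```python
# Recompute the boot-time total reported by systemd in the log above.
kernel_s = 4.920     # time spent in the kernel before PID 1 started
userspace_s = 3.277  # time from PID 1 start to reaching the default target
total_s = round(kernel_s + userspace_s, 3)

print(f"total: {total_s}s")  # matches the logged 8.197s
```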
Jan 29 11:12:19.350544 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:12:19.352843 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:12:19.360425 (systemd)[1666]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:12:19.440666 systemd[1666]: Queued start job for default target default.target.
Jan 29 11:12:19.441051 systemd[1666]: Created slice app.slice - User Application Slice.
Jan 29 11:12:19.441075 systemd[1666]: Reached target paths.target - Paths.
Jan 29 11:12:19.441087 systemd[1666]: Reached target timers.target - Timers.
Jan 29 11:12:19.455566 systemd[1666]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:12:19.461391 systemd[1666]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:12:19.461468 systemd[1666]: Reached target sockets.target - Sockets.
Jan 29 11:12:19.461480 systemd[1666]: Reached target basic.target - Basic System.
Jan 29 11:12:19.461516 systemd[1666]: Reached target default.target - Main User Target.
Jan 29 11:12:19.461540 systemd[1666]: Startup finished in 95ms.
Jan 29 11:12:19.461847 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:12:19.463576 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:12:19.525684 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:36796.service - OpenSSH per-connection server daemon (10.0.0.1:36796).
Jan 29 11:12:19.563193 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 36796 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:19.564633 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:19.568464 systemd-logind[1528]: New session 2 of user core.
Jan 29 11:12:19.579686 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:12:19.631439 sshd[1681]: Connection closed by 10.0.0.1 port 36796
Jan 29 11:12:19.631431 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:19.644672 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:36812.service - OpenSSH per-connection server daemon (10.0.0.1:36812).
Jan 29 11:12:19.645060 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:36796.service: Deactivated successfully.
Jan 29 11:12:19.646901 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:12:19.647426 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:12:19.648978 systemd-logind[1528]: Removed session 2.
Jan 29 11:12:19.681044 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 36812 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:19.682362 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:19.686443 systemd-logind[1528]: New session 3 of user core.
Jan 29 11:12:19.697680 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:12:19.744917 sshd[1689]: Connection closed by 10.0.0.1 port 36812
Jan 29 11:12:19.745616 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:19.756715 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:36828.service - OpenSSH per-connection server daemon (10.0.0.1:36828).
Jan 29 11:12:19.757098 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:36812.service: Deactivated successfully.
Jan 29 11:12:19.758837 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:12:19.759439 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:12:19.760801 systemd-logind[1528]: Removed session 3.
Jan 29 11:12:19.792971 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 36828 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:19.794098 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:19.798131 systemd-logind[1528]: New session 4 of user core.
Jan 29 11:12:19.808687 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:12:19.859424 sshd[1697]: Connection closed by 10.0.0.1 port 36828
Jan 29 11:12:19.859928 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:19.869718 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:36842.service - OpenSSH per-connection server daemon (10.0.0.1:36842).
Jan 29 11:12:19.870123 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:36828.service: Deactivated successfully.
Jan 29 11:12:19.871905 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:12:19.872422 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:12:19.873848 systemd-logind[1528]: Removed session 4.
Jan 29 11:12:19.906008 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 36842 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:19.907252 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:19.911360 systemd-logind[1528]: New session 5 of user core.
Jan 29 11:12:19.923681 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:12:19.987584 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:12:19.987878 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:12:20.001335 sudo[1706]: pam_unix(sudo:session): session closed for user root
Jan 29 11:12:20.002748 sshd[1705]: Connection closed by 10.0.0.1 port 36842
Jan 29 11:12:20.003155 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:20.016787 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:36846.service - OpenSSH per-connection server daemon (10.0.0.1:36846).
Jan 29 11:12:20.017261 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:36842.service: Deactivated successfully.
Jan 29 11:12:20.019031 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:12:20.019536 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:12:20.020819 systemd-logind[1528]: Removed session 5.
Jan 29 11:12:20.053736 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 36846 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:20.055290 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:20.059092 systemd-logind[1528]: New session 6 of user core.
Jan 29 11:12:20.068706 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 11:12:20.119767 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 11:12:20.120059 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:12:20.123276 sudo[1716]: pam_unix(sudo:session): session closed for user root
Jan 29 11:12:20.128088 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 29 11:12:20.128360 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:12:20.144891 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:12:20.168543 augenrules[1738]: No rules
Jan 29 11:12:20.169795 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:12:20.170047 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:12:20.171046 sudo[1715]: pam_unix(sudo:session): session closed for user root
Jan 29 11:12:20.172222 sshd[1714]: Connection closed by 10.0.0.1 port 36846
Jan 29 11:12:20.172594 sshd-session[1708]: pam_unix(sshd:session): session closed for user core
Jan 29 11:12:20.183687 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:36850.service - OpenSSH per-connection server daemon (10.0.0.1:36850).
Jan 29 11:12:20.184107 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:36846.service: Deactivated successfully.
Jan 29 11:12:20.186738 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Jan 29 11:12:20.187158 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 11:12:20.188283 systemd-logind[1528]: Removed session 6.
Jan 29 11:12:20.222955 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 36850 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:12:20.224160 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:12:20.228001 systemd-logind[1528]: New session 7 of user core.
Jan 29 11:12:20.237724 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 11:12:20.289688 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 11:12:20.289985 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:12:20.608731 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 29 11:12:20.608861 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 29 11:12:20.854387 dockerd[1771]: time="2025-01-29T11:12:20.854327388Z" level=info msg="Starting up"
Jan 29 11:12:21.088462 dockerd[1771]: time="2025-01-29T11:12:21.088099103Z" level=info msg="Loading containers: start."
Jan 29 11:12:21.231435 kernel: Initializing XFRM netlink socket
Jan 29 11:12:21.293063 systemd-networkd[1231]: docker0: Link UP
Jan 29 11:12:21.322679 dockerd[1771]: time="2025-01-29T11:12:21.322623466Z" level=info msg="Loading containers: done."
Jan 29 11:12:21.338522 dockerd[1771]: time="2025-01-29T11:12:21.338153077Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 29 11:12:21.338522 dockerd[1771]: time="2025-01-29T11:12:21.338255542Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 29 11:12:21.338522 dockerd[1771]: time="2025-01-29T11:12:21.338362009Z" level=info msg="Daemon has completed initialization"
Jan 29 11:12:21.364249 dockerd[1771]: time="2025-01-29T11:12:21.364178087Z" level=info msg="API listen on /run/docker.sock"
Jan 29 11:12:21.364760 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 29 11:12:21.920823 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck885700069-merged.mount: Deactivated successfully.
Jan 29 11:12:22.017661 containerd[1547]: time="2025-01-29T11:12:22.017610039Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 29 11:12:22.850072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143732938.mount: Deactivated successfully.
Jan 29 11:12:24.127683 containerd[1547]: time="2025-01-29T11:12:24.127623404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:24.128283 containerd[1547]: time="2025-01-29T11:12:24.128245768Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864937"
Jan 29 11:12:24.128969 containerd[1547]: time="2025-01-29T11:12:24.128919526Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:24.131983 containerd[1547]: time="2025-01-29T11:12:24.131939175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:24.134177 containerd[1547]: time="2025-01-29T11:12:24.134127150Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.116467189s"
Jan 29 11:12:24.134177 containerd[1547]: time="2025-01-29T11:12:24.134171706Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 29 11:12:24.154041 containerd[1547]: time="2025-01-29T11:12:24.154005017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 29 11:12:24.745941 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:12:24.755672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:12:24.845576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:12:24.849584 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:12:24.892206 kubelet[2046]: E0129 11:12:24.892151 2046 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:12:24.895467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:12:24.895653 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:12:25.731982 containerd[1547]: time="2025-01-29T11:12:25.731929477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:25.732474 containerd[1547]: time="2025-01-29T11:12:25.732288840Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901563"
Jan 29 11:12:25.733109 containerd[1547]: time="2025-01-29T11:12:25.733078731Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:25.736776 containerd[1547]: time="2025-01-29T11:12:25.736716995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:25.737849 containerd[1547]: time="2025-01-29T11:12:25.737816287Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.583770839s"
Jan 29 11:12:25.737849 containerd[1547]: time="2025-01-29T11:12:25.737846877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 29 11:12:25.754903 containerd[1547]: time="2025-01-29T11:12:25.754848884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 29 11:12:27.146933 containerd[1547]: time="2025-01-29T11:12:27.146788661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:27.147798 containerd[1547]: time="2025-01-29T11:12:27.147492391Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164340"
Jan 29 11:12:27.148528 containerd[1547]: time="2025-01-29T11:12:27.148470356Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:27.151377 containerd[1547]: time="2025-01-29T11:12:27.151333153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:27.152622 containerd[1547]: time="2025-01-29T11:12:27.152591646Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.397075893s"
Jan 29 11:12:27.152674 containerd[1547]: time="2025-01-29T11:12:27.152624586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 29 11:12:27.171873 containerd[1547]: time="2025-01-29T11:12:27.171839786Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 29 11:12:28.394692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415619597.mount: Deactivated successfully.
Jan 29 11:12:28.690304 containerd[1547]: time="2025-01-29T11:12:28.690181231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:28.690875 containerd[1547]: time="2025-01-29T11:12:28.690841378Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714"
Jan 29 11:12:28.691833 containerd[1547]: time="2025-01-29T11:12:28.691797426Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:28.693662 containerd[1547]: time="2025-01-29T11:12:28.693628263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:28.694428 containerd[1547]: time="2025-01-29T11:12:28.694351137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.522473148s"
Jan 29 11:12:28.694428 containerd[1547]: time="2025-01-29T11:12:28.694383497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 29 11:12:28.712616 containerd[1547]: time="2025-01-29T11:12:28.712547964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 29 11:12:29.439118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250629304.mount: Deactivated successfully.
Jan 29 11:12:30.250870 containerd[1547]: time="2025-01-29T11:12:30.250823797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.251821 containerd[1547]: time="2025-01-29T11:12:30.251543472Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 29 11:12:30.254422 containerd[1547]: time="2025-01-29T11:12:30.252289910Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.255303 containerd[1547]: time="2025-01-29T11:12:30.255274985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.256443 containerd[1547]: time="2025-01-29T11:12:30.256389218Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.543804385s"
Jan 29 11:12:30.256443 containerd[1547]: time="2025-01-29T11:12:30.256442308Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 29 11:12:30.274235 containerd[1547]: time="2025-01-29T11:12:30.274037775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 29 11:12:30.838811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277179348.mount: Deactivated successfully.
Jan 29 11:12:30.841587 containerd[1547]: time="2025-01-29T11:12:30.841528342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.843151 containerd[1547]: time="2025-01-29T11:12:30.843098120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Jan 29 11:12:30.843760 containerd[1547]: time="2025-01-29T11:12:30.843703639Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.846017 containerd[1547]: time="2025-01-29T11:12:30.845982642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:30.847213 containerd[1547]: time="2025-01-29T11:12:30.847166596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 573.096074ms"
Jan 29 11:12:30.847213 containerd[1547]: time="2025-01-29T11:12:30.847203771Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 29 11:12:30.866011 containerd[1547]: time="2025-01-29T11:12:30.865931620Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 29 11:12:31.579976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226334296.mount: Deactivated successfully.
Jan 29 11:12:33.559449 containerd[1547]: time="2025-01-29T11:12:33.559389988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:33.560376 containerd[1547]: time="2025-01-29T11:12:33.559762360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Jan 29 11:12:33.561130 containerd[1547]: time="2025-01-29T11:12:33.561083526Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:33.564433 containerd[1547]: time="2025-01-29T11:12:33.564383888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:12:33.565773 containerd[1547]: time="2025-01-29T11:12:33.565737832Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.699768152s"
Jan 29 11:12:33.565811 containerd[1547]: time="2025-01-29T11:12:33.565773684Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 29 11:12:35.063917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 29 11:12:35.073593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:12:35.279157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:12:35.283768 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:12:35.320955 kubelet[2284]: E0129 11:12:35.320794 2284 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:12:35.323112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:12:35.323245 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:12:38.140206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:12:38.149604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:12:38.167565 systemd[1]: Reloading requested from client PID 2302 ('systemctl') (unit session-7.scope)...
Jan 29 11:12:38.167588 systemd[1]: Reloading...
Jan 29 11:12:38.231435 zram_generator::config[2344]: No configuration found.
Jan 29 11:12:38.406641 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:12:38.464315 systemd[1]: Reloading finished in 296 ms.
Jan 29 11:12:38.497205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 29 11:12:38.497270 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 29 11:12:38.497518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:12:38.499723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:12:38.600550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:12:38.604111 (kubelet)[2399]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 11:12:38.645896 kubelet[2399]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:12:38.645896 kubelet[2399]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 11:12:38.645896 kubelet[2399]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 11:12:38.646254 kubelet[2399]: I0129 11:12:38.645991 2399 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 11:12:39.191445 kubelet[2399]: I0129 11:12:39.191388 2399 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 11:12:39.191445 kubelet[2399]: I0129 11:12:39.191432 2399 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 11:12:39.191650 kubelet[2399]: I0129 11:12:39.191631 2399 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 11:12:39.228698 kubelet[2399]: E0129 11:12:39.228670 2399 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.229211 kubelet[2399]: I0129 11:12:39.229189 2399 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 11:12:39.238324 kubelet[2399]: I0129 11:12:39.238282 2399 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 11:12:39.238828 kubelet[2399]: I0129 11:12:39.238788 2399 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 11:12:39.238977 kubelet[2399]: I0129 11:12:39.238816 2399 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 11:12:39.239064 kubelet[2399]: I0129 11:12:39.239036 2399 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 11:12:39.239064 kubelet[2399]: I0129 11:12:39.239046 2399 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 11:12:39.239304 kubelet[2399]: I0129 11:12:39.239276 2399 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:12:39.241973 kubelet[2399]: I0129 11:12:39.241935 2399 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 11:12:39.241973 kubelet[2399]: I0129 11:12:39.241963 2399 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 11:12:39.242282 kubelet[2399]: I0129 11:12:39.242269 2399 kubelet.go:312] "Adding apiserver pod source"
Jan 29 11:12:39.242491 kubelet[2399]: I0129 11:12:39.242391 2399 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 11:12:39.242943 kubelet[2399]: W0129 11:12:39.242885 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.242997 kubelet[2399]: E0129 11:12:39.242947 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.243033 kubelet[2399]: W0129 11:12:39.242996 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.243033 kubelet[2399]: E0129 11:12:39.243028 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.247434 kubelet[2399]: I0129 11:12:39.247126 2399 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 29 11:12:39.247562 kubelet[2399]: I0129 11:12:39.247543 2399 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 11:12:39.247656 kubelet[2399]: W0129 11:12:39.247645 2399 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 11:12:39.248522 kubelet[2399]: I0129 11:12:39.248383 2399 server.go:1264] "Started kubelet"
Jan 29 11:12:39.249333 kubelet[2399]: I0129 11:12:39.249302 2399 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 11:12:39.251525 kubelet[2399]: I0129 11:12:39.251033 2399 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 11:12:39.251525 kubelet[2399]: I0129 11:12:39.251095 2399 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 11:12:39.251525 kubelet[2399]: I0129 11:12:39.251275 2399 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 11:12:39.252053 kubelet[2399]: E0129 11:12:39.251863 2399 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f257383a4776a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:12:39.248361322 +0000 UTC m=+0.641427796,LastTimestamp:2025-01-29 11:12:39.248361322 +0000 UTC m=+0.641427796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 11:12:39.252441 kubelet[2399]: E0129 11:12:39.252374 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 11:12:39.252503 kubelet[2399]: I0129 11:12:39.252495 2399 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 11:12:39.252606 kubelet[2399]: I0129 11:12:39.252592 2399 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 11:12:39.253228 kubelet[2399]: I0129 11:12:39.252605 2399 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 11:12:39.255265 kubelet[2399]: I0129 11:12:39.255213 2399 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 11:12:39.255443 kubelet[2399]: W0129 11:12:39.255376 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.255443 kubelet[2399]: E0129 11:12:39.255436 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.257720 kubelet[2399]: E0129 11:12:39.257658 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
Jan 29 11:12:39.260061 kubelet[2399]: I0129 11:12:39.259694 2399 factory.go:221] Registration of the systemd container factory successfully
Jan 29 11:12:39.260061 kubelet[2399]: I0129 11:12:39.259803 2399 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 11:12:39.260820 kubelet[2399]: E0129 11:12:39.260793 2399 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 11:12:39.261811 kubelet[2399]: I0129 11:12:39.261780 2399 factory.go:221] Registration of the containerd container factory successfully
Jan 29 11:12:39.266778 kubelet[2399]: I0129 11:12:39.266743 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 11:12:39.267752 kubelet[2399]: I0129 11:12:39.267734 2399 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 11:12:39.267846 kubelet[2399]: I0129 11:12:39.267836 2399 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 11:12:39.267915 kubelet[2399]: I0129 11:12:39.267906 2399 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 11:12:39.268213 kubelet[2399]: E0129 11:12:39.267998 2399 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 11:12:39.275259 kubelet[2399]: W0129 11:12:39.273399 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.275259 kubelet[2399]: E0129 11:12:39.273482 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused
Jan 29 11:12:39.279037 kubelet[2399]: I0129 11:12:39.279015 2399 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 11:12:39.279037 kubelet[2399]: I0129 11:12:39.279035 2399 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 11:12:39.279123 kubelet[2399]: I0129 11:12:39.279053 2399 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 11:12:39.281290 kubelet[2399]: I0129 11:12:39.281260 2399 policy_none.go:49] "None policy: Start"
Jan 29 11:12:39.281891 kubelet[2399]: I0129 11:12:39.281846 2399 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 11:12:39.282228 kubelet[2399]: I0129 11:12:39.281878 2399 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 11:12:39.286887 kubelet[2399]: I0129 11:12:39.286862 2399 manager.go:479] "Failed to read data from checkpoint"
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:12:39.287173 kubelet[2399]: I0129 11:12:39.287137 2399 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:12:39.287315 kubelet[2399]: I0129 11:12:39.287303 2399 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:12:39.288468 kubelet[2399]: E0129 11:12:39.288451 2399 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:12:39.353703 kubelet[2399]: I0129 11:12:39.353674 2399 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:39.354050 kubelet[2399]: E0129 11:12:39.354024 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:12:39.368389 kubelet[2399]: I0129 11:12:39.368348 2399 topology_manager.go:215] "Topology Admit Handler" podUID="a9331a722030ced54eb5fbbfbecf0683" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:12:39.369428 kubelet[2399]: I0129 11:12:39.369286 2399 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:12:39.370276 kubelet[2399]: I0129 11:12:39.370222 2399 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:12:39.457259 kubelet[2399]: I0129 11:12:39.457088 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " 
pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:39.457259 kubelet[2399]: I0129 11:12:39.457128 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:39.457259 kubelet[2399]: I0129 11:12:39.457156 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:12:39.457259 kubelet[2399]: I0129 11:12:39.457175 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:39.457259 kubelet[2399]: I0129 11:12:39.457190 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:39.457433 kubelet[2399]: I0129 11:12:39.457208 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " 
pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:39.457433 kubelet[2399]: I0129 11:12:39.457224 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:39.457433 kubelet[2399]: I0129 11:12:39.457239 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:39.457433 kubelet[2399]: I0129 11:12:39.457253 2399 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:39.458800 kubelet[2399]: E0129 11:12:39.458737 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jan 29 11:12:39.555868 kubelet[2399]: I0129 11:12:39.555815 2399 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:39.556192 kubelet[2399]: E0129 11:12:39.556158 2399 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:12:39.674836 
kubelet[2399]: E0129 11:12:39.674800 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:39.675583 kubelet[2399]: E0129 11:12:39.675189 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:39.675775 containerd[1547]: time="2025-01-29T11:12:39.675740343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:39.676236 containerd[1547]: time="2025-01-29T11:12:39.675794976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:39.676918 kubelet[2399]: E0129 11:12:39.676885 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:39.677607 containerd[1547]: time="2025-01-29T11:12:39.677522223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9331a722030ced54eb5fbbfbecf0683,Namespace:kube-system,Attempt:0,}" Jan 29 11:12:39.860167 kubelet[2399]: E0129 11:12:39.860114 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jan 29 11:12:39.957892 kubelet[2399]: I0129 11:12:39.957839 2399 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:39.958136 kubelet[2399]: E0129 11:12:39.958114 2399 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:12:40.236808 kubelet[2399]: W0129 11:12:40.236647 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.236808 kubelet[2399]: E0129 11:12:40.236715 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.275425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount716529735.mount: Deactivated successfully. Jan 29 11:12:40.279099 containerd[1547]: time="2025-01-29T11:12:40.279048809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:40.280762 containerd[1547]: time="2025-01-29T11:12:40.280719402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:12:40.281627 containerd[1547]: time="2025-01-29T11:12:40.281576882Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:40.284009 containerd[1547]: time="2025-01-29T11:12:40.283978610Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:40.285025 containerd[1547]: time="2025-01-29T11:12:40.285001247Z" 
level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:40.285330 containerd[1547]: time="2025-01-29T11:12:40.285291550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:12:40.285946 containerd[1547]: time="2025-01-29T11:12:40.285923519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:12:40.286462 containerd[1547]: time="2025-01-29T11:12:40.286427543Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:12:40.287186 containerd[1547]: time="2025-01-29T11:12:40.286932726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 610.767025ms" Jan 29 11:12:40.292018 containerd[1547]: time="2025-01-29T11:12:40.291960134Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 614.382758ms" Jan 29 11:12:40.292720 containerd[1547]: time="2025-01-29T11:12:40.292693627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.563096ms" Jan 29 11:12:40.391826 kubelet[2399]: W0129 11:12:40.391754 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.391826 kubelet[2399]: E0129 11:12:40.391825 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.430496 kubelet[2399]: W0129 11:12:40.430401 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.430496 kubelet[2399]: E0129 11:12:40.430487 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.500453 kubelet[2399]: W0129 11:12:40.500237 2399 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.500453 kubelet[2399]: E0129 11:12:40.500314 2399 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
10.0.0.115:6443: connect: connection refused Jan 29 11:12:40.545264 containerd[1547]: time="2025-01-29T11:12:40.545086376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:40.545883 containerd[1547]: time="2025-01-29T11:12:40.545839054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:40.545883 containerd[1547]: time="2025-01-29T11:12:40.545869711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.546013 containerd[1547]: time="2025-01-29T11:12:40.545959924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.546096 containerd[1547]: time="2025-01-29T11:12:40.545965440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:40.546316 containerd[1547]: time="2025-01-29T11:12:40.546269133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:40.546798 containerd[1547]: time="2025-01-29T11:12:40.546614515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.546798 containerd[1547]: time="2025-01-29T11:12:40.546737544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.548231 containerd[1547]: time="2025-01-29T11:12:40.547064020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:12:40.548231 containerd[1547]: time="2025-01-29T11:12:40.547683518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:12:40.548231 containerd[1547]: time="2025-01-29T11:12:40.547698107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.548231 containerd[1547]: time="2025-01-29T11:12:40.547806506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:12:40.591010 containerd[1547]: time="2025-01-29T11:12:40.590493654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"aeb236673afda61032866cefb6bbc1a8b866359323c7270a32b9a4aeabf8e981\"" Jan 29 11:12:40.592180 kubelet[2399]: E0129 11:12:40.592151 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:40.595316 containerd[1547]: time="2025-01-29T11:12:40.595275925Z" level=info msg="CreateContainer within sandbox \"aeb236673afda61032866cefb6bbc1a8b866359323c7270a32b9a4aeabf8e981\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:12:40.597346 containerd[1547]: time="2025-01-29T11:12:40.597270956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a9331a722030ced54eb5fbbfbecf0683,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1f652da8eb7228600fb4bec1dcad472738453a82d30f74b9642f6f34e3e2627\"" Jan 29 11:12:40.598215 kubelet[2399]: E0129 11:12:40.598144 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:40.600369 containerd[1547]: time="2025-01-29T11:12:40.600300935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6892c9d4db79c7ec3c8816115724cba34518dba368551bcdc9cd52d6478cc056\"" Jan 29 11:12:40.601024 kubelet[2399]: E0129 11:12:40.601003 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:40.602077 containerd[1547]: time="2025-01-29T11:12:40.601785668Z" level=info msg="CreateContainer within sandbox \"c1f652da8eb7228600fb4bec1dcad472738453a82d30f74b9642f6f34e3e2627\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:12:40.603041 containerd[1547]: time="2025-01-29T11:12:40.603013711Z" level=info msg="CreateContainer within sandbox \"6892c9d4db79c7ec3c8816115724cba34518dba368551bcdc9cd52d6478cc056\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:12:40.610839 containerd[1547]: time="2025-01-29T11:12:40.610797983Z" level=info msg="CreateContainer within sandbox \"aeb236673afda61032866cefb6bbc1a8b866359323c7270a32b9a4aeabf8e981\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a003a9609737920e544f28d4cc7514f6b3d4d3d51dadd007eb9b8b05c7e7c445\"" Jan 29 11:12:40.615066 containerd[1547]: time="2025-01-29T11:12:40.615023070Z" level=info msg="StartContainer for \"a003a9609737920e544f28d4cc7514f6b3d4d3d51dadd007eb9b8b05c7e7c445\"" Jan 29 11:12:40.619249 containerd[1547]: time="2025-01-29T11:12:40.619183685Z" level=info msg="CreateContainer within sandbox \"c1f652da8eb7228600fb4bec1dcad472738453a82d30f74b9642f6f34e3e2627\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"329066cdd59492d911eb1af02f882237f5d6f43be325ed37bfe08854a1d69eba\"" Jan 29 11:12:40.619668 containerd[1547]: time="2025-01-29T11:12:40.619642543Z" level=info msg="StartContainer for \"329066cdd59492d911eb1af02f882237f5d6f43be325ed37bfe08854a1d69eba\"" Jan 29 11:12:40.622339 containerd[1547]: time="2025-01-29T11:12:40.622293045Z" level=info msg="CreateContainer within sandbox \"6892c9d4db79c7ec3c8816115724cba34518dba368551bcdc9cd52d6478cc056\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"39003619dcf6762b133196fdbab9a14c974d8148523d041969fda2515c342784\"" Jan 29 11:12:40.622928 containerd[1547]: time="2025-01-29T11:12:40.622718568Z" level=info msg="StartContainer for \"39003619dcf6762b133196fdbab9a14c974d8148523d041969fda2515c342784\"" Jan 29 11:12:40.661056 kubelet[2399]: E0129 11:12:40.660940 2399 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jan 29 11:12:40.679130 containerd[1547]: time="2025-01-29T11:12:40.679012922Z" level=info msg="StartContainer for \"a003a9609737920e544f28d4cc7514f6b3d4d3d51dadd007eb9b8b05c7e7c445\" returns successfully" Jan 29 11:12:40.686177 containerd[1547]: time="2025-01-29T11:12:40.686064620Z" level=info msg="StartContainer for \"39003619dcf6762b133196fdbab9a14c974d8148523d041969fda2515c342784\" returns successfully" Jan 29 11:12:40.686177 containerd[1547]: time="2025-01-29T11:12:40.686129612Z" level=info msg="StartContainer for \"329066cdd59492d911eb1af02f882237f5d6f43be325ed37bfe08854a1d69eba\" returns successfully" Jan 29 11:12:40.760960 kubelet[2399]: I0129 11:12:40.760764 2399 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:40.761590 kubelet[2399]: E0129 11:12:40.761541 2399 kubelet_node_status.go:96] "Unable to register node with API server" 
err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 11:12:41.293306 kubelet[2399]: E0129 11:12:41.293276 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:41.296264 kubelet[2399]: E0129 11:12:41.296024 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:41.296652 kubelet[2399]: E0129 11:12:41.296622 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:42.264460 kubelet[2399]: E0129 11:12:42.264394 2399 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:12:42.297660 kubelet[2399]: E0129 11:12:42.297379 2399 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 29 11:12:42.300697 kubelet[2399]: E0129 11:12:42.300501 2399 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:42.364188 kubelet[2399]: I0129 11:12:42.364150 2399 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:42.372352 kubelet[2399]: I0129 11:12:42.372330 2399 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:12:42.379154 kubelet[2399]: E0129 11:12:42.379106 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:42.479850 kubelet[2399]: E0129 
11:12:42.479814 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:42.580528 kubelet[2399]: E0129 11:12:42.580499 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:42.681053 kubelet[2399]: E0129 11:12:42.681019 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:42.781684 kubelet[2399]: E0129 11:12:42.781657 2399 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:12:43.244468 kubelet[2399]: I0129 11:12:43.244424 2399 apiserver.go:52] "Watching apiserver" Jan 29 11:12:43.253248 kubelet[2399]: I0129 11:12:43.253187 2399 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:12:44.076066 systemd[1]: Reloading requested from client PID 2677 ('systemctl') (unit session-7.scope)... Jan 29 11:12:44.076084 systemd[1]: Reloading... Jan 29 11:12:44.144440 zram_generator::config[2719]: No configuration found. Jan 29 11:12:44.233773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:12:44.287847 systemd[1]: Reloading finished in 211 ms. Jan 29 11:12:44.310260 kubelet[2399]: I0129 11:12:44.310223 2399 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:44.310286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:12:44.320632 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:12:44.320914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:44.331873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 11:12:44.415715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:12:44.419928 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:12:44.454744 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:44.454744 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:12:44.454744 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:12:44.455078 kubelet[2768]: I0129 11:12:44.454780 2768 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:12:44.458327 kubelet[2768]: I0129 11:12:44.458305 2768 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:12:44.458327 kubelet[2768]: I0129 11:12:44.458327 2768 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:12:44.458529 kubelet[2768]: I0129 11:12:44.458513 2768 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:12:44.460630 kubelet[2768]: I0129 11:12:44.460608 2768 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 29 11:12:44.462297 kubelet[2768]: I0129 11:12:44.462167 2768 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:12:44.470713 kubelet[2768]: I0129 11:12:44.470682 2768 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:12:44.471155 kubelet[2768]: I0129 11:12:44.471126 2768 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:12:44.472380 kubelet[2768]: I0129 11:12:44.471390 2768 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManage
rReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:12:44.472380 kubelet[2768]: I0129 11:12:44.471583 2768 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:12:44.472380 kubelet[2768]: I0129 11:12:44.471593 2768 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:12:44.472380 kubelet[2768]: I0129 11:12:44.471627 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:44.472380 kubelet[2768]: I0129 11:12:44.471725 2768 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:12:44.472765 kubelet[2768]: I0129 11:12:44.471736 2768 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:12:44.472765 kubelet[2768]: I0129 11:12:44.471761 2768 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:12:44.472765 kubelet[2768]: I0129 11:12:44.471773 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:12:44.473136 kubelet[2768]: I0129 11:12:44.472934 2768 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:12:44.473136 kubelet[2768]: I0129 11:12:44.473099 2768 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:12:44.474027 kubelet[2768]: I0129 11:12:44.473600 2768 server.go:1264] "Started kubelet" Jan 29 11:12:44.476411 kubelet[2768]: I0129 11:12:44.474302 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:12:44.476411 kubelet[2768]: I0129 11:12:44.474579 2768 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:12:44.476411 kubelet[2768]: I0129 11:12:44.474612 2768 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 
11:12:44.476411 kubelet[2768]: I0129 11:12:44.475365 2768 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:12:44.476897 kubelet[2768]: I0129 11:12:44.476876 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:12:44.478953 kubelet[2768]: I0129 11:12:44.478310 2768 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:12:44.478953 kubelet[2768]: I0129 11:12:44.478396 2768 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:12:44.478953 kubelet[2768]: I0129 11:12:44.478542 2768 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:12:44.496888 kubelet[2768]: I0129 11:12:44.496826 2768 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:12:44.497500 kubelet[2768]: I0129 11:12:44.496526 2768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:12:44.499001 kubelet[2768]: I0129 11:12:44.498744 2768 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:12:44.499348 kubelet[2768]: E0129 11:12:44.499329 2768 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:12:44.499621 kubelet[2768]: I0129 11:12:44.499599 2768 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:12:44.499667 kubelet[2768]: I0129 11:12:44.499634 2768 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:12:44.500128 kubelet[2768]: E0129 11:12:44.500089 2768 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:12:44.502257 kubelet[2768]: I0129 11:12:44.501947 2768 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:12:44.502257 kubelet[2768]: I0129 11:12:44.501971 2768 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:12:44.535926 kubelet[2768]: I0129 11:12:44.535904 2768 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:12:44.536346 kubelet[2768]: I0129 11:12:44.536080 2768 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:12:44.536346 kubelet[2768]: I0129 11:12:44.536105 2768 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:12:44.536346 kubelet[2768]: I0129 11:12:44.536240 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:12:44.536346 kubelet[2768]: I0129 11:12:44.536250 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:12:44.536346 kubelet[2768]: I0129 11:12:44.536269 2768 policy_none.go:49] "None policy: Start" Jan 29 11:12:44.536864 kubelet[2768]: I0129 11:12:44.536847 2768 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:12:44.537467 kubelet[2768]: I0129 11:12:44.536942 2768 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:12:44.537467 kubelet[2768]: I0129 11:12:44.537079 2768 state_mem.go:75] "Updated machine memory state" Jan 29 11:12:44.538131 kubelet[2768]: I0129 11:12:44.538101 2768 
manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:12:44.538289 kubelet[2768]: I0129 11:12:44.538252 2768 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:12:44.538358 kubelet[2768]: I0129 11:12:44.538342 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:12:44.585022 kubelet[2768]: I0129 11:12:44.583871 2768 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 11:12:44.590743 kubelet[2768]: I0129 11:12:44.590715 2768 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 29 11:12:44.590821 kubelet[2768]: I0129 11:12:44.590792 2768 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 29 11:12:44.600500 kubelet[2768]: I0129 11:12:44.600451 2768 topology_manager.go:215] "Topology Admit Handler" podUID="a9331a722030ced54eb5fbbfbecf0683" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 11:12:44.600594 kubelet[2768]: I0129 11:12:44.600548 2768 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 11:12:44.600594 kubelet[2768]: I0129 11:12:44.600586 2768 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 11:12:44.780184 kubelet[2768]: I0129 11:12:44.780131 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:44.780314 kubelet[2768]: I0129 11:12:44.780192 2768 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:44.780314 kubelet[2768]: I0129 11:12:44.780228 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:44.780314 kubelet[2768]: I0129 11:12:44.780252 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:12:44.780314 kubelet[2768]: I0129 11:12:44.780272 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:44.780314 kubelet[2768]: I0129 11:12:44.780307 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9331a722030ced54eb5fbbfbecf0683-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a9331a722030ced54eb5fbbfbecf0683\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:44.780443 kubelet[2768]: I0129 11:12:44.780324 2768 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:44.780443 kubelet[2768]: I0129 11:12:44.780339 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:44.780443 kubelet[2768]: I0129 11:12:44.780384 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:12:44.932263 kubelet[2768]: E0129 11:12:44.931763 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:44.932263 kubelet[2768]: E0129 11:12:44.931767 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:44.932263 kubelet[2768]: E0129 11:12:44.931933 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:45.472686 kubelet[2768]: I0129 11:12:45.472491 2768 apiserver.go:52] "Watching apiserver" Jan 29 11:12:45.479288 kubelet[2768]: I0129 11:12:45.479248 2768 
desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:12:45.515366 kubelet[2768]: E0129 11:12:45.514957 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:45.515366 kubelet[2768]: E0129 11:12:45.515170 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:45.521516 kubelet[2768]: E0129 11:12:45.521062 2768 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:12:45.521516 kubelet[2768]: E0129 11:12:45.521491 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:45.537632 kubelet[2768]: I0129 11:12:45.537585 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5375531420000002 podStartE2EDuration="1.537553142s" podCreationTimestamp="2025-01-29 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:45.531431239 +0000 UTC m=+1.108812254" watchObservedRunningTime="2025-01-29 11:12:45.537553142 +0000 UTC m=+1.114934157" Jan 29 11:12:45.537762 kubelet[2768]: I0129 11:12:45.537677 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.537672097 podStartE2EDuration="1.537672097s" podCreationTimestamp="2025-01-29 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-01-29 11:12:45.537468854 +0000 UTC m=+1.114849869" watchObservedRunningTime="2025-01-29 11:12:45.537672097 +0000 UTC m=+1.115053072" Jan 29 11:12:46.528570 kubelet[2768]: E0129 11:12:46.528535 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:46.529085 kubelet[2768]: E0129 11:12:46.529035 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:49.206992 kubelet[2768]: E0129 11:12:49.206947 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:49.325871 sudo[1751]: pam_unix(sudo:session): session closed for user root Jan 29 11:12:49.327555 sshd[1750]: Connection closed by 10.0.0.1 port 36850 Jan 29 11:12:49.328085 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Jan 29 11:12:49.333497 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:36850.service: Deactivated successfully. Jan 29 11:12:49.335251 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:12:49.335816 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:12:49.336636 systemd-logind[1528]: Removed session 7. 
Jan 29 11:12:53.889014 kubelet[2768]: E0129 11:12:53.888971 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:53.907637 kubelet[2768]: I0129 11:12:53.907589 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=9.907576058 podStartE2EDuration="9.907576058s" podCreationTimestamp="2025-01-29 11:12:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:12:45.545952256 +0000 UTC m=+1.123333271" watchObservedRunningTime="2025-01-29 11:12:53.907576058 +0000 UTC m=+9.484957073" Jan 29 11:12:54.532479 kubelet[2768]: E0129 11:12:54.532388 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:56.466050 kubelet[2768]: E0129 11:12:56.466002 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:12:58.123736 update_engine[1532]: I20250129 11:12:58.123670 1532 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:12:58.150277 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2864) Jan 29 11:12:59.213951 kubelet[2768]: E0129 11:12:59.213644 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:00.600162 kubelet[2768]: I0129 11:13:00.599812 2768 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:13:00.612392 containerd[1547]: time="2025-01-29T11:13:00.612336741Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:13:00.612776 kubelet[2768]: I0129 11:13:00.612663 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:13:01.357821 kubelet[2768]: I0129 11:13:01.357743 2768 topology_manager.go:215] "Topology Admit Handler" podUID="7ce95160-c31d-442e-965d-b08f9272247c" podNamespace="kube-system" podName="kube-proxy-c54d6" Jan 29 11:13:01.401348 kubelet[2768]: I0129 11:13:01.401304 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ce95160-c31d-442e-965d-b08f9272247c-xtables-lock\") pod \"kube-proxy-c54d6\" (UID: \"7ce95160-c31d-442e-965d-b08f9272247c\") " pod="kube-system/kube-proxy-c54d6" Jan 29 11:13:01.401348 kubelet[2768]: I0129 11:13:01.401351 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp48p\" (UniqueName: \"kubernetes.io/projected/7ce95160-c31d-442e-965d-b08f9272247c-kube-api-access-gp48p\") pod \"kube-proxy-c54d6\" (UID: \"7ce95160-c31d-442e-965d-b08f9272247c\") " pod="kube-system/kube-proxy-c54d6" Jan 29 11:13:01.401528 kubelet[2768]: I0129 11:13:01.401382 2768 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ce95160-c31d-442e-965d-b08f9272247c-kube-proxy\") pod \"kube-proxy-c54d6\" (UID: \"7ce95160-c31d-442e-965d-b08f9272247c\") " pod="kube-system/kube-proxy-c54d6" Jan 29 11:13:01.401528 kubelet[2768]: I0129 11:13:01.401397 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ce95160-c31d-442e-965d-b08f9272247c-lib-modules\") pod \"kube-proxy-c54d6\" (UID: \"7ce95160-c31d-442e-965d-b08f9272247c\") " pod="kube-system/kube-proxy-c54d6" Jan 29 11:13:01.465769 kubelet[2768]: I0129 11:13:01.465290 2768 topology_manager.go:215] "Topology Admit Handler" podUID="707cd4b1-7cfd-43c6-b9c1-e1494111aa8f" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-k5r85" Jan 29 11:13:01.502113 kubelet[2768]: I0129 11:13:01.501807 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk5c5\" (UniqueName: \"kubernetes.io/projected/707cd4b1-7cfd-43c6-b9c1-e1494111aa8f-kube-api-access-gk5c5\") pod \"tigera-operator-7bc55997bb-k5r85\" (UID: \"707cd4b1-7cfd-43c6-b9c1-e1494111aa8f\") " pod="tigera-operator/tigera-operator-7bc55997bb-k5r85" Jan 29 11:13:01.502113 kubelet[2768]: I0129 11:13:01.501867 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/707cd4b1-7cfd-43c6-b9c1-e1494111aa8f-var-lib-calico\") pod \"tigera-operator-7bc55997bb-k5r85\" (UID: \"707cd4b1-7cfd-43c6-b9c1-e1494111aa8f\") " pod="tigera-operator/tigera-operator-7bc55997bb-k5r85" Jan 29 11:13:01.663399 kubelet[2768]: E0129 11:13:01.663281 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 
11:13:01.663897 containerd[1547]: time="2025-01-29T11:13:01.663853224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c54d6,Uid:7ce95160-c31d-442e-965d-b08f9272247c,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:01.683688 containerd[1547]: time="2025-01-29T11:13:01.683602856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:01.683688 containerd[1547]: time="2025-01-29T11:13:01.683662696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:01.683891 containerd[1547]: time="2025-01-29T11:13:01.683676856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:01.683891 containerd[1547]: time="2025-01-29T11:13:01.683797856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:01.714931 containerd[1547]: time="2025-01-29T11:13:01.714824449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c54d6,Uid:7ce95160-c31d-442e-965d-b08f9272247c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a9277aeed793a8b5aa66060d4ca4d6d0c3024bdfde8e0a894dd4064e83d22dd3\"" Jan 29 11:13:01.717543 kubelet[2768]: E0129 11:13:01.717518 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:01.724043 containerd[1547]: time="2025-01-29T11:13:01.722891478Z" level=info msg="CreateContainer within sandbox \"a9277aeed793a8b5aa66060d4ca4d6d0c3024bdfde8e0a894dd4064e83d22dd3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:13:01.750421 containerd[1547]: time="2025-01-29T11:13:01.750362978Z" level=info msg="CreateContainer within sandbox 
\"a9277aeed793a8b5aa66060d4ca4d6d0c3024bdfde8e0a894dd4064e83d22dd3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce876ed492a35d8e7df9bc502d15c1548bba33a9ed484ff7497506e09a55d7dd\"" Jan 29 11:13:01.751070 containerd[1547]: time="2025-01-29T11:13:01.751024300Z" level=info msg="StartContainer for \"ce876ed492a35d8e7df9bc502d15c1548bba33a9ed484ff7497506e09a55d7dd\"" Jan 29 11:13:01.771929 containerd[1547]: time="2025-01-29T11:13:01.771884456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-k5r85,Uid:707cd4b1-7cfd-43c6-b9c1-e1494111aa8f,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:13:01.792260 containerd[1547]: time="2025-01-29T11:13:01.792167969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:01.792260 containerd[1547]: time="2025-01-29T11:13:01.792235770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:01.792260 containerd[1547]: time="2025-01-29T11:13:01.792250890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:01.792512 containerd[1547]: time="2025-01-29T11:13:01.792352730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:01.807658 containerd[1547]: time="2025-01-29T11:13:01.807608145Z" level=info msg="StartContainer for \"ce876ed492a35d8e7df9bc502d15c1548bba33a9ed484ff7497506e09a55d7dd\" returns successfully" Jan 29 11:13:01.841608 containerd[1547]: time="2025-01-29T11:13:01.841557628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-k5r85,Uid:707cd4b1-7cfd-43c6-b9c1-e1494111aa8f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f52c5541d6a20413eb5240ccf7f4b136e71cee23f6e6527cea279535387537e7\"" Jan 29 11:13:01.845911 containerd[1547]: time="2025-01-29T11:13:01.845856764Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:13:02.546245 kubelet[2768]: E0129 11:13:02.546213 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:02.554396 kubelet[2768]: I0129 11:13:02.554324 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c54d6" podStartSLOduration=1.5543095980000001 podStartE2EDuration="1.554309598s" podCreationTimestamp="2025-01-29 11:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:02.554223358 +0000 UTC m=+18.131604373" watchObservedRunningTime="2025-01-29 11:13:02.554309598 +0000 UTC m=+18.131690613" Jan 29 11:13:03.929190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932618277.mount: Deactivated successfully. 
Jan 29 11:13:04.388631 containerd[1547]: time="2025-01-29T11:13:04.388581776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:04.389164 containerd[1547]: time="2025-01-29T11:13:04.389107018Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 11:13:04.389994 containerd[1547]: time="2025-01-29T11:13:04.389958860Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:04.392003 containerd[1547]: time="2025-01-29T11:13:04.391965067Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:04.393445 containerd[1547]: time="2025-01-29T11:13:04.393383511Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.547474547s" Jan 29 11:13:04.393490 containerd[1547]: time="2025-01-29T11:13:04.393445551Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 11:13:04.415387 containerd[1547]: time="2025-01-29T11:13:04.407300795Z" level=info msg="CreateContainer within sandbox \"f52c5541d6a20413eb5240ccf7f4b136e71cee23f6e6527cea279535387537e7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:13:04.446053 containerd[1547]: time="2025-01-29T11:13:04.446007597Z" level=info msg="CreateContainer within sandbox 
\"f52c5541d6a20413eb5240ccf7f4b136e71cee23f6e6527cea279535387537e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1eb1754873aa5d833fa43744423028c48cd89950909b9cb29309529fcf635dd2\"" Jan 29 11:13:04.446655 containerd[1547]: time="2025-01-29T11:13:04.446564998Z" level=info msg="StartContainer for \"1eb1754873aa5d833fa43744423028c48cd89950909b9cb29309529fcf635dd2\"" Jan 29 11:13:04.548580 containerd[1547]: time="2025-01-29T11:13:04.548535919Z" level=info msg="StartContainer for \"1eb1754873aa5d833fa43744423028c48cd89950909b9cb29309529fcf635dd2\" returns successfully" Jan 29 11:13:04.589899 kubelet[2768]: I0129 11:13:04.589600 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-k5r85" podStartSLOduration=1.0383172489999999 podStartE2EDuration="3.589580848s" podCreationTimestamp="2025-01-29 11:13:01 +0000 UTC" firstStartedPulling="2025-01-29 11:13:01.844391919 +0000 UTC m=+17.421772934" lastFinishedPulling="2025-01-29 11:13:04.395655518 +0000 UTC m=+19.973036533" observedRunningTime="2025-01-29 11:13:04.588712846 +0000 UTC m=+20.166093861" watchObservedRunningTime="2025-01-29 11:13:04.589580848 +0000 UTC m=+20.166961863" Jan 29 11:13:04.909106 systemd[1]: run-containerd-runc-k8s.io-1eb1754873aa5d833fa43744423028c48cd89950909b9cb29309529fcf635dd2-runc.XFF93k.mount: Deactivated successfully. 
Jan 29 11:13:09.423329 kubelet[2768]: I0129 11:13:09.419183 2768 topology_manager.go:215] "Topology Admit Handler" podUID="7790d108-cf28-4632-a0a9-57b5fdbb5e44" podNamespace="calico-system" podName="calico-typha-c8d76d456-nd48r"
Jan 29 11:13:09.459824 kubelet[2768]: I0129 11:13:09.459780 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7790d108-cf28-4632-a0a9-57b5fdbb5e44-tigera-ca-bundle\") pod \"calico-typha-c8d76d456-nd48r\" (UID: \"7790d108-cf28-4632-a0a9-57b5fdbb5e44\") " pod="calico-system/calico-typha-c8d76d456-nd48r"
Jan 29 11:13:09.459824 kubelet[2768]: I0129 11:13:09.459823 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7790d108-cf28-4632-a0a9-57b5fdbb5e44-typha-certs\") pod \"calico-typha-c8d76d456-nd48r\" (UID: \"7790d108-cf28-4632-a0a9-57b5fdbb5e44\") " pod="calico-system/calico-typha-c8d76d456-nd48r"
Jan 29 11:13:09.459973 kubelet[2768]: I0129 11:13:09.459843 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kztz4\" (UniqueName: \"kubernetes.io/projected/7790d108-cf28-4632-a0a9-57b5fdbb5e44-kube-api-access-kztz4\") pod \"calico-typha-c8d76d456-nd48r\" (UID: \"7790d108-cf28-4632-a0a9-57b5fdbb5e44\") " pod="calico-system/calico-typha-c8d76d456-nd48r"
Jan 29 11:13:09.604450 kubelet[2768]: I0129 11:13:09.604389 2768 topology_manager.go:215] "Topology Admit Handler" podUID="7b9182d0-5181-480d-827e-dc9343f89ad5" podNamespace="calico-system" podName="calico-node-gmkgp"
Jan 29 11:13:09.661260 kubelet[2768]: I0129 11:13:09.661211 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-cni-log-dir\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661260 kubelet[2768]: I0129 11:13:09.661261 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-var-run-calico\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661500 kubelet[2768]: I0129 11:13:09.661279 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-cni-bin-dir\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661500 kubelet[2768]: I0129 11:13:09.661295 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-cni-net-dir\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661500 kubelet[2768]: I0129 11:13:09.661332 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftlcf\" (UniqueName: \"kubernetes.io/projected/7b9182d0-5181-480d-827e-dc9343f89ad5-kube-api-access-ftlcf\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661500 kubelet[2768]: I0129 11:13:09.661354 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-var-lib-calico\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661500 kubelet[2768]: I0129 11:13:09.661419 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-lib-modules\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661612 kubelet[2768]: I0129 11:13:09.661448 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b9182d0-5181-480d-827e-dc9343f89ad5-node-certs\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661612 kubelet[2768]: I0129 11:13:09.661473 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-policysync\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661612 kubelet[2768]: I0129 11:13:09.661504 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-xtables-lock\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661612 kubelet[2768]: I0129 11:13:09.661558 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b9182d0-5181-480d-827e-dc9343f89ad5-flexvol-driver-host\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.661612 kubelet[2768]: I0129 11:13:09.661602 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b9182d0-5181-480d-827e-dc9343f89ad5-tigera-ca-bundle\") pod \"calico-node-gmkgp\" (UID: \"7b9182d0-5181-480d-827e-dc9343f89ad5\") " pod="calico-system/calico-node-gmkgp"
Jan 29 11:13:09.725468 kubelet[2768]: E0129 11:13:09.724941 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:13:09.725908 containerd[1547]: time="2025-01-29T11:13:09.725868796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8d76d456-nd48r,Uid:7790d108-cf28-4632-a0a9-57b5fdbb5e44,Namespace:calico-system,Attempt:0,}"
Jan 29 11:13:09.746824 containerd[1547]: time="2025-01-29T11:13:09.746653448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:13:09.746824 containerd[1547]: time="2025-01-29T11:13:09.746700648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:13:09.746824 containerd[1547]: time="2025-01-29T11:13:09.746710888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:13:09.746961 containerd[1547]: time="2025-01-29T11:13:09.746790928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:13:09.762472 kubelet[2768]: E0129 11:13:09.762450 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.762472 kubelet[2768]: W0129 11:13:09.762470 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.762577 kubelet[2768]: E0129 11:13:09.762558 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.763855 kubelet[2768]: E0129 11:13:09.763829 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.763855 kubelet[2768]: W0129 11:13:09.763849 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.763978 kubelet[2768]: E0129 11:13:09.763961 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.764199 kubelet[2768]: E0129 11:13:09.764185 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.764240 kubelet[2768]: W0129 11:13:09.764200 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.764269 kubelet[2768]: E0129 11:13:09.764255 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.764478 kubelet[2768]: E0129 11:13:09.764464 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.764519 kubelet[2768]: W0129 11:13:09.764478 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.764548 kubelet[2768]: E0129 11:13:09.764525 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.765149 kubelet[2768]: E0129 11:13:09.765132 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.765149 kubelet[2768]: W0129 11:13:09.765148 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.765241 kubelet[2768]: E0129 11:13:09.765207 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.765442 kubelet[2768]: E0129 11:13:09.765427 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.765442 kubelet[2768]: W0129 11:13:09.765441 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.766699 kubelet[2768]: E0129 11:13:09.766683 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.770915 kubelet[2768]: E0129 11:13:09.770862 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.770970 kubelet[2768]: W0129 11:13:09.770915 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.771035 kubelet[2768]: E0129 11:13:09.771018 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.771166 kubelet[2768]: E0129 11:13:09.771153 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.771191 kubelet[2768]: W0129 11:13:09.771167 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.771219 kubelet[2768]: E0129 11:13:09.771208 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.771412 kubelet[2768]: E0129 11:13:09.771386 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.771448 kubelet[2768]: W0129 11:13:09.771399 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.771477 kubelet[2768]: E0129 11:13:09.771461 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.771627 kubelet[2768]: E0129 11:13:09.771610 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.771654 kubelet[2768]: W0129 11:13:09.771626 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.771680 kubelet[2768]: E0129 11:13:09.771662 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.771797 kubelet[2768]: E0129 11:13:09.771787 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.771819 kubelet[2768]: W0129 11:13:09.771797 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.771847 kubelet[2768]: E0129 11:13:09.771832 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.771967 kubelet[2768]: E0129 11:13:09.771957 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.771990 kubelet[2768]: W0129 11:13:09.771968 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.772016 kubelet[2768]: E0129 11:13:09.772002 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.772257 kubelet[2768]: E0129 11:13:09.772211 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.772257 kubelet[2768]: W0129 11:13:09.772231 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.772319 kubelet[2768]: E0129 11:13:09.772270 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.772438 kubelet[2768]: E0129 11:13:09.772427 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.772438 kubelet[2768]: W0129 11:13:09.772438 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.772488 kubelet[2768]: E0129 11:13:09.772465 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.772607 kubelet[2768]: E0129 11:13:09.772594 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.772632 kubelet[2768]: W0129 11:13:09.772607 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.772652 kubelet[2768]: E0129 11:13:09.772639 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.772795 kubelet[2768]: E0129 11:13:09.772777 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.772795 kubelet[2768]: W0129 11:13:09.772791 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.772891 kubelet[2768]: E0129 11:13:09.772877 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.773006 kubelet[2768]: E0129 11:13:09.772994 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.773027 kubelet[2768]: W0129 11:13:09.773007 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.773115 kubelet[2768]: E0129 11:13:09.773104 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.773190 kubelet[2768]: E0129 11:13:09.773182 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.773213 kubelet[2768]: W0129 11:13:09.773191 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.773288 kubelet[2768]: E0129 11:13:09.773275 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.773349 kubelet[2768]: E0129 11:13:09.773339 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.773373 kubelet[2768]: W0129 11:13:09.773349 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.773532 kubelet[2768]: E0129 11:13:09.773453 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.773790 kubelet[2768]: E0129 11:13:09.773774 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.773820 kubelet[2768]: W0129 11:13:09.773789 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.773840 kubelet[2768]: E0129 11:13:09.773819 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.774286 kubelet[2768]: E0129 11:13:09.774015 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.774286 kubelet[2768]: W0129 11:13:09.774027 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.774286 kubelet[2768]: E0129 11:13:09.774058 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.774286 kubelet[2768]: E0129 11:13:09.774192 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.774286 kubelet[2768]: W0129 11:13:09.774200 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.774286 kubelet[2768]: E0129 11:13:09.774240 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.776102 kubelet[2768]: E0129 11:13:09.775746 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.776194 kubelet[2768]: W0129 11:13:09.776106 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.776194 kubelet[2768]: E0129 11:13:09.776164 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.776734 kubelet[2768]: E0129 11:13:09.776679 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.777818 kubelet[2768]: W0129 11:13:09.777777 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.777906 kubelet[2768]: E0129 11:13:09.777833 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.778132 kubelet[2768]: E0129 11:13:09.778114 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.778169 kubelet[2768]: W0129 11:13:09.778131 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.778258 kubelet[2768]: E0129 11:13:09.778241 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.778447 kubelet[2768]: E0129 11:13:09.778433 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.778481 kubelet[2768]: W0129 11:13:09.778447 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.778579 kubelet[2768]: E0129 11:13:09.778564 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.778950 kubelet[2768]: E0129 11:13:09.778750 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.778950 kubelet[2768]: W0129 11:13:09.778780 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.778950 kubelet[2768]: E0129 11:13:09.778910 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.779869 kubelet[2768]: E0129 11:13:09.779188 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.779922 kubelet[2768]: W0129 11:13:09.779875 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.779960 kubelet[2768]: E0129 11:13:09.779939 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.780665 kubelet[2768]: E0129 11:13:09.780644 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.780665 kubelet[2768]: W0129 11:13:09.780663 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.780879 kubelet[2768]: E0129 11:13:09.780860 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.782097 kubelet[2768]: E0129 11:13:09.782070 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.782132 kubelet[2768]: W0129 11:13:09.782099 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.782197 kubelet[2768]: E0129 11:13:09.782174 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.782372 kubelet[2768]: E0129 11:13:09.782354 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.782372 kubelet[2768]: W0129 11:13:09.782368 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.782454 kubelet[2768]: E0129 11:13:09.782381 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.802458 containerd[1547]: time="2025-01-29T11:13:09.802154788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c8d76d456-nd48r,Uid:7790d108-cf28-4632-a0a9-57b5fdbb5e44,Namespace:calico-system,Attempt:0,} returns sandbox id \"6444556142800573d127fbad77f2c46873e20401e28c94adc9a7113500a76478\""
Jan 29 11:13:09.806995 kubelet[2768]: E0129 11:13:09.806971 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:13:09.807962 containerd[1547]: time="2025-01-29T11:13:09.807919243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 29 11:13:09.813247 kubelet[2768]: I0129 11:13:09.813209 2768 topology_manager.go:215] "Topology Admit Handler" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" podNamespace="calico-system" podName="csi-node-driver-whhjq"
Jan 29 11:13:09.813510 kubelet[2768]: E0129 11:13:09.813487 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6"
Jan 29 11:13:09.836097 kubelet[2768]: E0129 11:13:09.836057 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836097 kubelet[2768]: W0129 11:13:09.836080 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836097 kubelet[2768]: E0129 11:13:09.836107 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.836320 kubelet[2768]: E0129 11:13:09.836298 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836320 kubelet[2768]: W0129 11:13:09.836311 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836320 kubelet[2768]: E0129 11:13:09.836319 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.836477 kubelet[2768]: E0129 11:13:09.836460 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836477 kubelet[2768]: W0129 11:13:09.836470 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836527 kubelet[2768]: E0129 11:13:09.836479 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.836618 kubelet[2768]: E0129 11:13:09.836600 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836618 kubelet[2768]: W0129 11:13:09.836610 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836665 kubelet[2768]: E0129 11:13:09.836619 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.836761 kubelet[2768]: E0129 11:13:09.836750 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836785 kubelet[2768]: W0129 11:13:09.836760 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836785 kubelet[2768]: E0129 11:13:09.836767 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.836897 kubelet[2768]: E0129 11:13:09.836884 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.836897 kubelet[2768]: W0129 11:13:09.836894 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.836995 kubelet[2768]: E0129 11:13:09.836901 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837032 kubelet[2768]: E0129 11:13:09.837017 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837032 kubelet[2768]: W0129 11:13:09.837026 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837082 kubelet[2768]: E0129 11:13:09.837034 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837157 kubelet[2768]: E0129 11:13:09.837148 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837157 kubelet[2768]: W0129 11:13:09.837157 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837201 kubelet[2768]: E0129 11:13:09.837164 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837318 kubelet[2768]: E0129 11:13:09.837301 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837318 kubelet[2768]: W0129 11:13:09.837311 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837318 kubelet[2768]: E0129 11:13:09.837318 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837460 kubelet[2768]: E0129 11:13:09.837450 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837460 kubelet[2768]: W0129 11:13:09.837459 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837508 kubelet[2768]: E0129 11:13:09.837466 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837642 kubelet[2768]: E0129 11:13:09.837620 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837642 kubelet[2768]: W0129 11:13:09.837631 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837642 kubelet[2768]: E0129 11:13:09.837639 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837782 kubelet[2768]: E0129 11:13:09.837771 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837782 kubelet[2768]: W0129 11:13:09.837781 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837821 kubelet[2768]: E0129 11:13:09.837790 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.837927 kubelet[2768]: E0129 11:13:09.837917 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.837927 kubelet[2768]: W0129 11:13:09.837926 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.837984 kubelet[2768]: E0129 11:13:09.837933 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:13:09.838062 kubelet[2768]: E0129 11:13:09.838050 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:13:09.838062 kubelet[2768]: W0129 11:13:09.838059 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:13:09.838112 kubelet[2768]: E0129 11:13:09.838066 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.838193 kubelet[2768]: E0129 11:13:09.838182 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838193 kubelet[2768]: W0129 11:13:09.838190 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838246 kubelet[2768]: E0129 11:13:09.838197 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.838331 kubelet[2768]: E0129 11:13:09.838320 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838331 kubelet[2768]: W0129 11:13:09.838329 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838375 kubelet[2768]: E0129 11:13:09.838338 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.838488 kubelet[2768]: E0129 11:13:09.838475 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838519 kubelet[2768]: W0129 11:13:09.838487 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838519 kubelet[2768]: E0129 11:13:09.838495 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.838627 kubelet[2768]: E0129 11:13:09.838616 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838627 kubelet[2768]: W0129 11:13:09.838626 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838671 kubelet[2768]: E0129 11:13:09.838633 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.838761 kubelet[2768]: E0129 11:13:09.838751 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838785 kubelet[2768]: W0129 11:13:09.838760 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838785 kubelet[2768]: E0129 11:13:09.838768 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.838890 kubelet[2768]: E0129 11:13:09.838881 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.838911 kubelet[2768]: W0129 11:13:09.838890 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.838911 kubelet[2768]: E0129 11:13:09.838897 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.862287 kubelet[2768]: E0129 11:13:09.862262 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.862287 kubelet[2768]: W0129 11:13:09.862281 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.862371 kubelet[2768]: E0129 11:13:09.862293 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.862371 kubelet[2768]: I0129 11:13:09.862318 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5725830-f2eb-461f-bca5-bd9a3c65abd6-socket-dir\") pod \"csi-node-driver-whhjq\" (UID: \"e5725830-f2eb-461f-bca5-bd9a3c65abd6\") " pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:09.862555 kubelet[2768]: E0129 11:13:09.862530 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.862555 kubelet[2768]: W0129 11:13:09.862544 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.862610 kubelet[2768]: E0129 11:13:09.862560 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.862610 kubelet[2768]: I0129 11:13:09.862578 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5725830-f2eb-461f-bca5-bd9a3c65abd6-kubelet-dir\") pod \"csi-node-driver-whhjq\" (UID: \"e5725830-f2eb-461f-bca5-bd9a3c65abd6\") " pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:09.862787 kubelet[2768]: E0129 11:13:09.862765 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.862787 kubelet[2768]: W0129 11:13:09.862778 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.862834 kubelet[2768]: E0129 11:13:09.862792 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.862834 kubelet[2768]: I0129 11:13:09.862807 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5725830-f2eb-461f-bca5-bd9a3c65abd6-registration-dir\") pod \"csi-node-driver-whhjq\" (UID: \"e5725830-f2eb-461f-bca5-bd9a3c65abd6\") " pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:09.863001 kubelet[2768]: E0129 11:13:09.862982 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.863001 kubelet[2768]: W0129 11:13:09.862995 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.863047 kubelet[2768]: E0129 11:13:09.863012 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.863047 kubelet[2768]: I0129 11:13:09.863026 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4tg\" (UniqueName: \"kubernetes.io/projected/e5725830-f2eb-461f-bca5-bd9a3c65abd6-kube-api-access-6k4tg\") pod \"csi-node-driver-whhjq\" (UID: \"e5725830-f2eb-461f-bca5-bd9a3c65abd6\") " pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:09.863252 kubelet[2768]: E0129 11:13:09.863216 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.863252 kubelet[2768]: W0129 11:13:09.863237 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.863309 kubelet[2768]: E0129 11:13:09.863256 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.863309 kubelet[2768]: I0129 11:13:09.863272 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5725830-f2eb-461f-bca5-bd9a3c65abd6-varrun\") pod \"csi-node-driver-whhjq\" (UID: \"e5725830-f2eb-461f-bca5-bd9a3c65abd6\") " pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:09.863502 kubelet[2768]: E0129 11:13:09.863488 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.863502 kubelet[2768]: W0129 11:13:09.863501 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.863552 kubelet[2768]: E0129 11:13:09.863514 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.863667 kubelet[2768]: E0129 11:13:09.863655 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.863697 kubelet[2768]: W0129 11:13:09.863666 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.863717 kubelet[2768]: E0129 11:13:09.863690 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.863878 kubelet[2768]: E0129 11:13:09.863864 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.863878 kubelet[2768]: W0129 11:13:09.863875 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.863941 kubelet[2768]: E0129 11:13:09.863894 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.864080 kubelet[2768]: E0129 11:13:09.864068 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864080 kubelet[2768]: W0129 11:13:09.864078 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864130 kubelet[2768]: E0129 11:13:09.864098 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.864251 kubelet[2768]: E0129 11:13:09.864239 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864251 kubelet[2768]: W0129 11:13:09.864250 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864301 kubelet[2768]: E0129 11:13:09.864269 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.864440 kubelet[2768]: E0129 11:13:09.864428 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864440 kubelet[2768]: W0129 11:13:09.864438 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864496 kubelet[2768]: E0129 11:13:09.864457 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.864581 kubelet[2768]: E0129 11:13:09.864571 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864581 kubelet[2768]: W0129 11:13:09.864580 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864622 kubelet[2768]: E0129 11:13:09.864589 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.864760 kubelet[2768]: E0129 11:13:09.864748 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864760 kubelet[2768]: W0129 11:13:09.864757 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864821 kubelet[2768]: E0129 11:13:09.864764 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.864913 kubelet[2768]: E0129 11:13:09.864902 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.864938 kubelet[2768]: W0129 11:13:09.864920 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.864938 kubelet[2768]: E0129 11:13:09.864928 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.865072 kubelet[2768]: E0129 11:13:09.865063 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.865097 kubelet[2768]: W0129 11:13:09.865072 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.865097 kubelet[2768]: E0129 11:13:09.865081 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.910362 kubelet[2768]: E0129 11:13:09.910330 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:09.912125 containerd[1547]: time="2025-01-29T11:13:09.912084826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmkgp,Uid:7b9182d0-5181-480d-827e-dc9343f89ad5,Namespace:calico-system,Attempt:0,}" Jan 29 11:13:09.931189 containerd[1547]: time="2025-01-29T11:13:09.931117874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:09.931189 containerd[1547]: time="2025-01-29T11:13:09.931178154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:09.931324 containerd[1547]: time="2025-01-29T11:13:09.931198114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:09.931324 containerd[1547]: time="2025-01-29T11:13:09.931295514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:09.962632 containerd[1547]: time="2025-01-29T11:13:09.962587873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gmkgp,Uid:7b9182d0-5181-480d-827e-dc9343f89ad5,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\"" Jan 29 11:13:09.963281 kubelet[2768]: E0129 11:13:09.963257 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:09.963712 kubelet[2768]: E0129 11:13:09.963694 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.963712 kubelet[2768]: W0129 11:13:09.963709 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.963792 kubelet[2768]: E0129 11:13:09.963732 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.963947 kubelet[2768]: E0129 11:13:09.963932 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.963947 kubelet[2768]: W0129 11:13:09.963944 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.964000 kubelet[2768]: E0129 11:13:09.963952 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.964132 kubelet[2768]: E0129 11:13:09.964119 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.964132 kubelet[2768]: W0129 11:13:09.964129 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.964194 kubelet[2768]: E0129 11:13:09.964138 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.964295 kubelet[2768]: E0129 11:13:09.964282 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.964295 kubelet[2768]: W0129 11:13:09.964293 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.964345 kubelet[2768]: E0129 11:13:09.964301 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.964471 kubelet[2768]: E0129 11:13:09.964461 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.964508 kubelet[2768]: W0129 11:13:09.964471 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.964508 kubelet[2768]: E0129 11:13:09.964479 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.964748 kubelet[2768]: E0129 11:13:09.964732 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.964885 kubelet[2768]: W0129 11:13:09.964832 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.964885 kubelet[2768]: E0129 11:13:09.964847 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.965435 kubelet[2768]: E0129 11:13:09.965090 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.965435 kubelet[2768]: W0129 11:13:09.965102 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.965435 kubelet[2768]: E0129 11:13:09.965116 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.965435 kubelet[2768]: E0129 11:13:09.965319 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.965435 kubelet[2768]: W0129 11:13:09.965327 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.965435 kubelet[2768]: E0129 11:13:09.965340 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.965652 kubelet[2768]: E0129 11:13:09.965516 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.965652 kubelet[2768]: W0129 11:13:09.965530 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.965652 kubelet[2768]: E0129 11:13:09.965544 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.965721 kubelet[2768]: E0129 11:13:09.965683 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.965721 kubelet[2768]: W0129 11:13:09.965700 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.965721 kubelet[2768]: E0129 11:13:09.965713 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.966179 kubelet[2768]: E0129 11:13:09.965863 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966179 kubelet[2768]: W0129 11:13:09.965872 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966179 kubelet[2768]: E0129 11:13:09.965885 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.966179 kubelet[2768]: E0129 11:13:09.966027 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966179 kubelet[2768]: W0129 11:13:09.966033 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966179 kubelet[2768]: E0129 11:13:09.966056 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.966179 kubelet[2768]: E0129 11:13:09.966182 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966179 kubelet[2768]: W0129 11:13:09.966189 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966394 kubelet[2768]: E0129 11:13:09.966209 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.966394 kubelet[2768]: E0129 11:13:09.966348 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966394 kubelet[2768]: W0129 11:13:09.966354 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966394 kubelet[2768]: E0129 11:13:09.966390 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.966562 kubelet[2768]: E0129 11:13:09.966546 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966562 kubelet[2768]: W0129 11:13:09.966556 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966613 kubelet[2768]: E0129 11:13:09.966585 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.966722 kubelet[2768]: E0129 11:13:09.966712 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966722 kubelet[2768]: W0129 11:13:09.966722 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966768 kubelet[2768]: E0129 11:13:09.966733 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.966886 kubelet[2768]: E0129 11:13:09.966867 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.966911 kubelet[2768]: W0129 11:13:09.966886 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.966911 kubelet[2768]: E0129 11:13:09.966898 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.967101 kubelet[2768]: E0129 11:13:09.967075 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.967101 kubelet[2768]: W0129 11:13:09.967083 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.967101 kubelet[2768]: E0129 11:13:09.967095 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.967813 kubelet[2768]: E0129 11:13:09.967798 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.967813 kubelet[2768]: W0129 11:13:09.967813 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.968019 kubelet[2768]: E0129 11:13:09.967994 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.968050 kubelet[2768]: E0129 11:13:09.967999 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.968050 kubelet[2768]: W0129 11:13:09.968044 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.968092 kubelet[2768]: E0129 11:13:09.968069 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.968952 kubelet[2768]: E0129 11:13:09.968934 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.969010 kubelet[2768]: W0129 11:13:09.968955 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.969010 kubelet[2768]: E0129 11:13:09.968973 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.969150 kubelet[2768]: E0129 11:13:09.969138 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.969150 kubelet[2768]: W0129 11:13:09.969149 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.969210 kubelet[2768]: E0129 11:13:09.969162 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.969332 kubelet[2768]: E0129 11:13:09.969321 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.969332 kubelet[2768]: W0129 11:13:09.969331 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.969391 kubelet[2768]: E0129 11:13:09.969375 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.969510 kubelet[2768]: E0129 11:13:09.969499 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.969510 kubelet[2768]: W0129 11:13:09.969509 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.969568 kubelet[2768]: E0129 11:13:09.969518 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:09.969768 kubelet[2768]: E0129 11:13:09.969753 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.969768 kubelet[2768]: W0129 11:13:09.969767 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.969866 kubelet[2768]: E0129 11:13:09.969777 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:09.979853 kubelet[2768]: E0129 11:13:09.979678 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:09.979853 kubelet[2768]: W0129 11:13:09.979696 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:09.979853 kubelet[2768]: E0129 11:13:09.979708 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:10.643096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836849365.mount: Deactivated successfully. 
Jan 29 11:13:10.911154 containerd[1547]: time="2025-01-29T11:13:10.911019056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 29 11:13:10.911568 containerd[1547]: time="2025-01-29T11:13:10.911489297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:10.912736 containerd[1547]: time="2025-01-29T11:13:10.912697620Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:10.913295 containerd[1547]: time="2025-01-29T11:13:10.913260182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:10.914712 containerd[1547]: time="2025-01-29T11:13:10.914675345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.106722062s" Jan 29 11:13:10.914712 containerd[1547]: time="2025-01-29T11:13:10.914706185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 11:13:10.916526 containerd[1547]: time="2025-01-29T11:13:10.916498029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:13:10.927643 containerd[1547]: time="2025-01-29T11:13:10.927447856Z" level=info msg="CreateContainer within sandbox \"6444556142800573d127fbad77f2c46873e20401e28c94adc9a7113500a76478\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:13:10.936150 containerd[1547]: time="2025-01-29T11:13:10.936059517Z" level=info msg="CreateContainer within sandbox \"6444556142800573d127fbad77f2c46873e20401e28c94adc9a7113500a76478\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"21d07228bdeef38d932c1f020901ea1364e672c28ec99c815ccef621625650d7\"" Jan 29 11:13:10.937741 containerd[1547]: time="2025-01-29T11:13:10.937683401Z" level=info msg="StartContainer for \"21d07228bdeef38d932c1f020901ea1364e672c28ec99c815ccef621625650d7\"" Jan 29 11:13:10.994831 containerd[1547]: time="2025-01-29T11:13:10.994782539Z" level=info msg="StartContainer for \"21d07228bdeef38d932c1f020901ea1364e672c28ec99c815ccef621625650d7\" returns successfully" Jan 29 11:13:11.500688 kubelet[2768]: E0129 11:13:11.500649 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:11.588441 kubelet[2768]: E0129 11:13:11.588392 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:11.603491 kubelet[2768]: I0129 11:13:11.603168 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c8d76d456-nd48r" podStartSLOduration=1.495516251 podStartE2EDuration="2.603154796s" podCreationTimestamp="2025-01-29 11:13:09 +0000 UTC" firstStartedPulling="2025-01-29 11:13:09.807726602 +0000 UTC m=+25.385107617" lastFinishedPulling="2025-01-29 11:13:10.915365187 +0000 UTC m=+26.492746162" observedRunningTime="2025-01-29 11:13:11.602909956 +0000 UTC m=+27.180290971" watchObservedRunningTime="2025-01-29 11:13:11.603154796 
+0000 UTC m=+27.180535811" Jan 29 11:13:11.650375 kubelet[2768]: E0129 11:13:11.650346 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.650375 kubelet[2768]: W0129 11:13:11.650369 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.650375 kubelet[2768]: E0129 11:13:11.650388 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.650850 kubelet[2768]: E0129 11:13:11.650561 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.650850 kubelet[2768]: W0129 11:13:11.650570 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.650850 kubelet[2768]: E0129 11:13:11.650579 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.650850 kubelet[2768]: E0129 11:13:11.650835 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.650850 kubelet[2768]: W0129 11:13:11.650843 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.650850 kubelet[2768]: E0129 11:13:11.650851 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.651169 kubelet[2768]: E0129 11:13:11.650994 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651169 kubelet[2768]: W0129 11:13:11.651002 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651169 kubelet[2768]: E0129 11:13:11.651009 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.651169 kubelet[2768]: E0129 11:13:11.651155 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651169 kubelet[2768]: W0129 11:13:11.651163 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651169 kubelet[2768]: E0129 11:13:11.651170 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.651451 kubelet[2768]: E0129 11:13:11.651303 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651451 kubelet[2768]: W0129 11:13:11.651311 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651451 kubelet[2768]: E0129 11:13:11.651318 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.651696 kubelet[2768]: E0129 11:13:11.651508 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651696 kubelet[2768]: W0129 11:13:11.651515 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651696 kubelet[2768]: E0129 11:13:11.651522 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.651696 kubelet[2768]: E0129 11:13:11.651672 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651696 kubelet[2768]: W0129 11:13:11.651678 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651696 kubelet[2768]: E0129 11:13:11.651685 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.651995 kubelet[2768]: E0129 11:13:11.651834 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651995 kubelet[2768]: W0129 11:13:11.651841 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651995 kubelet[2768]: E0129 11:13:11.651848 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.651995 kubelet[2768]: E0129 11:13:11.651980 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.651995 kubelet[2768]: W0129 11:13:11.651988 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.651995 kubelet[2768]: E0129 11:13:11.651996 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.652220 kubelet[2768]: E0129 11:13:11.652136 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.652220 kubelet[2768]: W0129 11:13:11.652143 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.652220 kubelet[2768]: E0129 11:13:11.652150 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.652377 kubelet[2768]: E0129 11:13:11.652282 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.652377 kubelet[2768]: W0129 11:13:11.652288 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.652377 kubelet[2768]: E0129 11:13:11.652298 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.652474 kubelet[2768]: E0129 11:13:11.652438 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.652474 kubelet[2768]: W0129 11:13:11.652446 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.652474 kubelet[2768]: E0129 11:13:11.652453 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.652635 kubelet[2768]: E0129 11:13:11.652577 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.652635 kubelet[2768]: W0129 11:13:11.652584 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.652635 kubelet[2768]: E0129 11:13:11.652590 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.652739 kubelet[2768]: E0129 11:13:11.652728 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.652739 kubelet[2768]: W0129 11:13:11.652735 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.652787 kubelet[2768]: E0129 11:13:11.652743 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.675153 kubelet[2768]: E0129 11:13:11.675038 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.675153 kubelet[2768]: W0129 11:13:11.675054 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.675153 kubelet[2768]: E0129 11:13:11.675067 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.675297 kubelet[2768]: E0129 11:13:11.675282 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.675297 kubelet[2768]: W0129 11:13:11.675292 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.675340 kubelet[2768]: E0129 11:13:11.675302 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.675685 kubelet[2768]: E0129 11:13:11.675577 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.675685 kubelet[2768]: W0129 11:13:11.675592 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.675685 kubelet[2768]: E0129 11:13:11.675608 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.675947 kubelet[2768]: E0129 11:13:11.675849 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.675947 kubelet[2768]: W0129 11:13:11.675861 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.675947 kubelet[2768]: E0129 11:13:11.675878 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.676271 kubelet[2768]: E0129 11:13:11.676140 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.676271 kubelet[2768]: W0129 11:13:11.676152 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.676271 kubelet[2768]: E0129 11:13:11.676163 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.676455 kubelet[2768]: E0129 11:13:11.676441 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.676531 kubelet[2768]: W0129 11:13:11.676519 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.676594 kubelet[2768]: E0129 11:13:11.676585 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.676832 kubelet[2768]: E0129 11:13:11.676817 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.676832 kubelet[2768]: W0129 11:13:11.676830 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.676904 kubelet[2768]: E0129 11:13:11.676841 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.677349 kubelet[2768]: E0129 11:13:11.677332 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.677349 kubelet[2768]: W0129 11:13:11.677346 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.677459 kubelet[2768]: E0129 11:13:11.677358 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.677569 kubelet[2768]: E0129 11:13:11.677555 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.677606 kubelet[2768]: W0129 11:13:11.677571 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.677678 kubelet[2768]: E0129 11:13:11.677646 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.677718 kubelet[2768]: E0129 11:13:11.677709 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.677718 kubelet[2768]: W0129 11:13:11.677717 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.677797 kubelet[2768]: E0129 11:13:11.677773 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.677965 kubelet[2768]: E0129 11:13:11.677952 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.678000 kubelet[2768]: W0129 11:13:11.677965 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.678000 kubelet[2768]: E0129 11:13:11.677980 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.678201 kubelet[2768]: E0129 11:13:11.678188 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.678244 kubelet[2768]: W0129 11:13:11.678226 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.678277 kubelet[2768]: E0129 11:13:11.678251 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.678677 kubelet[2768]: E0129 11:13:11.678572 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.678677 kubelet[2768]: W0129 11:13:11.678586 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.678677 kubelet[2768]: E0129 11:13:11.678603 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.678856 kubelet[2768]: E0129 11:13:11.678845 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.678911 kubelet[2768]: W0129 11:13:11.678900 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.679007 kubelet[2768]: E0129 11:13:11.678967 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.679273 kubelet[2768]: E0129 11:13:11.679180 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.679273 kubelet[2768]: W0129 11:13:11.679192 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.679273 kubelet[2768]: E0129 11:13:11.679207 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.679580 kubelet[2768]: E0129 11:13:11.679567 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.679654 kubelet[2768]: W0129 11:13:11.679642 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.679824 kubelet[2768]: E0129 11:13:11.679717 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.679978 kubelet[2768]: E0129 11:13:11.679962 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.679978 kubelet[2768]: W0129 11:13:11.679976 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.680037 kubelet[2768]: E0129 11:13:11.679986 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:13:11.680420 kubelet[2768]: E0129 11:13:11.680388 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:13:11.680511 kubelet[2768]: W0129 11:13:11.680496 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:13:11.680581 kubelet[2768]: E0129 11:13:11.680554 2768 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:13:11.682356 containerd[1547]: time="2025-01-29T11:13:11.682303221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:11.682974 containerd[1547]: time="2025-01-29T11:13:11.682924222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 11:13:11.683784 containerd[1547]: time="2025-01-29T11:13:11.683752984Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:11.685782 containerd[1547]: time="2025-01-29T11:13:11.685749469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:11.687121 containerd[1547]: time="2025-01-29T11:13:11.687082392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 770.554802ms" Jan 29 11:13:11.687121 containerd[1547]: time="2025-01-29T11:13:11.687118792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 11:13:11.689002 containerd[1547]: time="2025-01-29T11:13:11.688933836Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:13:11.697539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2466740869.mount: Deactivated successfully. Jan 29 11:13:11.710648 containerd[1547]: time="2025-01-29T11:13:11.710605607Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a\"" Jan 29 11:13:11.710984 containerd[1547]: time="2025-01-29T11:13:11.710963767Z" level=info msg="StartContainer for \"df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a\"" Jan 29 11:13:11.761887 containerd[1547]: time="2025-01-29T11:13:11.761780566Z" level=info msg="StartContainer for \"df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a\" returns successfully" Jan 29 11:13:11.922724 containerd[1547]: time="2025-01-29T11:13:11.904704019Z" level=info msg="shim disconnected" id=df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a namespace=k8s.io Jan 29 11:13:11.922724 containerd[1547]: time="2025-01-29T11:13:11.922725901Z" level=warning msg="cleaning up after shim disconnected" id=df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a 
namespace=k8s.io Jan 29 11:13:11.923137 containerd[1547]: time="2025-01-29T11:13:11.922737981Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:12.579904 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df25d5a15d0fcff84c1d51f7c077f671e3955b9067189d29673c479832e4e56a-rootfs.mount: Deactivated successfully. Jan 29 11:13:12.591203 kubelet[2768]: E0129 11:13:12.591050 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:12.592599 containerd[1547]: time="2025-01-29T11:13:12.592569287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:13:12.596283 kubelet[2768]: I0129 11:13:12.596028 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:12.598565 kubelet[2768]: E0129 11:13:12.598544 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:12.862639 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:34802.service - OpenSSH per-connection server daemon (10.0.0.1:34802). Jan 29 11:13:12.902769 sshd[3509]: Accepted publickey for core from 10.0.0.1 port 34802 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:12.903251 sshd-session[3509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:12.907985 systemd-logind[1528]: New session 8 of user core. Jan 29 11:13:12.926626 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:13:13.041718 sshd[3512]: Connection closed by 10.0.0.1 port 34802 Jan 29 11:13:13.042021 sshd-session[3509]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:13.044392 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:34802.service: Deactivated successfully. 
Jan 29 11:13:13.046796 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:13:13.046974 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:13:13.048046 systemd-logind[1528]: Removed session 8. Jan 29 11:13:13.500770 kubelet[2768]: E0129 11:13:13.500687 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:14.476845 containerd[1547]: time="2025-01-29T11:13:14.476796705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:14.477824 containerd[1547]: time="2025-01-29T11:13:14.477559587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 11:13:14.478493 containerd[1547]: time="2025-01-29T11:13:14.478458949Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:14.481422 containerd[1547]: time="2025-01-29T11:13:14.481325955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:14.482744 containerd[1547]: time="2025-01-29T11:13:14.482519237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 
1.88991039s" Jan 29 11:13:14.482744 containerd[1547]: time="2025-01-29T11:13:14.482550397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 11:13:14.484337 containerd[1547]: time="2025-01-29T11:13:14.484309481Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:13:14.492863 containerd[1547]: time="2025-01-29T11:13:14.492764658Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86\"" Jan 29 11:13:14.493487 containerd[1547]: time="2025-01-29T11:13:14.493451820Z" level=info msg="StartContainer for \"f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86\"" Jan 29 11:13:14.546982 containerd[1547]: time="2025-01-29T11:13:14.545550968Z" level=info msg="StartContainer for \"f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86\" returns successfully" Jan 29 11:13:14.598005 kubelet[2768]: E0129 11:13:14.597963 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:15.193791 containerd[1547]: time="2025-01-29T11:13:15.193738380Z" level=info msg="shim disconnected" id=f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86 namespace=k8s.io Jan 29 11:13:15.193791 containerd[1547]: time="2025-01-29T11:13:15.193789420Z" level=warning msg="cleaning up after shim disconnected" id=f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86 namespace=k8s.io Jan 29 11:13:15.193982 containerd[1547]: time="2025-01-29T11:13:15.193798500Z" level=info 
msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:13:15.254423 kubelet[2768]: I0129 11:13:15.254384 2768 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:13:15.274163 kubelet[2768]: I0129 11:13:15.273652 2768 topology_manager.go:215] "Topology Admit Handler" podUID="a0701364-f7fc-4c5f-95e0-131b1446a6f8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-w884t" Jan 29 11:13:15.281088 kubelet[2768]: I0129 11:13:15.280954 2768 topology_manager.go:215] "Topology Admit Handler" podUID="189a793a-2e8c-4371-8403-accce88972f2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:15.281947 kubelet[2768]: I0129 11:13:15.281534 2768 topology_manager.go:215] "Topology Admit Handler" podUID="9ff24cea-d161-45f7-b61c-4aeb38cb42ab" podNamespace="calico-system" podName="calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:15.283921 kubelet[2768]: I0129 11:13:15.283889 2768 topology_manager.go:215] "Topology Admit Handler" podUID="932cd32e-4633-4030-9143-1db1b953d083" podNamespace="calico-apiserver" podName="calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:15.284432 kubelet[2768]: I0129 11:13:15.284392 2768 topology_manager.go:215] "Topology Admit Handler" podUID="28a382ce-ce53-43b1-8797-59a017f4f410" podNamespace="calico-apiserver" podName="calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:15.307348 kubelet[2768]: I0129 11:13:15.307298 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/932cd32e-4633-4030-9143-1db1b953d083-calico-apiserver-certs\") pod \"calico-apiserver-c6f7d4488-sk7nj\" (UID: \"932cd32e-4633-4030-9143-1db1b953d083\") " pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:15.307348 kubelet[2768]: I0129 11:13:15.307343 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn54c\" (UniqueName: 
\"kubernetes.io/projected/a0701364-f7fc-4c5f-95e0-131b1446a6f8-kube-api-access-sn54c\") pod \"coredns-7db6d8ff4d-w884t\" (UID: \"a0701364-f7fc-4c5f-95e0-131b1446a6f8\") " pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:15.307502 kubelet[2768]: I0129 11:13:15.307363 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9mzc\" (UniqueName: \"kubernetes.io/projected/189a793a-2e8c-4371-8403-accce88972f2-kube-api-access-b9mzc\") pod \"coredns-7db6d8ff4d-qzp2w\" (UID: \"189a793a-2e8c-4371-8403-accce88972f2\") " pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:15.307502 kubelet[2768]: I0129 11:13:15.307418 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0701364-f7fc-4c5f-95e0-131b1446a6f8-config-volume\") pod \"coredns-7db6d8ff4d-w884t\" (UID: \"a0701364-f7fc-4c5f-95e0-131b1446a6f8\") " pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:15.307502 kubelet[2768]: I0129 11:13:15.307456 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fqv\" (UniqueName: \"kubernetes.io/projected/932cd32e-4633-4030-9143-1db1b953d083-kube-api-access-b5fqv\") pod \"calico-apiserver-c6f7d4488-sk7nj\" (UID: \"932cd32e-4633-4030-9143-1db1b953d083\") " pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:15.307502 kubelet[2768]: I0129 11:13:15.307476 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ff24cea-d161-45f7-b61c-4aeb38cb42ab-tigera-ca-bundle\") pod \"calico-kube-controllers-ff5cc644f-g54rc\" (UID: \"9ff24cea-d161-45f7-b61c-4aeb38cb42ab\") " pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:15.307502 kubelet[2768]: I0129 11:13:15.307491 2768 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hf6\" (UniqueName: \"kubernetes.io/projected/9ff24cea-d161-45f7-b61c-4aeb38cb42ab-kube-api-access-f7hf6\") pod \"calico-kube-controllers-ff5cc644f-g54rc\" (UID: \"9ff24cea-d161-45f7-b61c-4aeb38cb42ab\") " pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:15.307612 kubelet[2768]: I0129 11:13:15.307509 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28a382ce-ce53-43b1-8797-59a017f4f410-calico-apiserver-certs\") pod \"calico-apiserver-c6f7d4488-cn6bv\" (UID: \"28a382ce-ce53-43b1-8797-59a017f4f410\") " pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:15.307612 kubelet[2768]: I0129 11:13:15.307524 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg8wr\" (UniqueName: \"kubernetes.io/projected/28a382ce-ce53-43b1-8797-59a017f4f410-kube-api-access-fg8wr\") pod \"calico-apiserver-c6f7d4488-cn6bv\" (UID: \"28a382ce-ce53-43b1-8797-59a017f4f410\") " pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:15.307612 kubelet[2768]: I0129 11:13:15.307541 2768 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/189a793a-2e8c-4371-8403-accce88972f2-config-volume\") pod \"coredns-7db6d8ff4d-qzp2w\" (UID: \"189a793a-2e8c-4371-8403-accce88972f2\") " pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:15.496766 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1eec1b966e522d66d7b73b76d7c7609b40ed7b529c9f0c877ae4b16dd0f9b86-rootfs.mount: Deactivated successfully. 
Jan 29 11:13:15.503158 containerd[1547]: time="2025-01-29T11:13:15.503105760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:0,}" Jan 29 11:13:15.597700 kubelet[2768]: E0129 11:13:15.597667 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:15.598627 containerd[1547]: time="2025-01-29T11:13:15.598591631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:15.602770 kubelet[2768]: E0129 11:13:15.602742 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:15.603988 kubelet[2768]: E0129 11:13:15.602975 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:15.604081 containerd[1547]: time="2025-01-29T11:13:15.604038282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:0,}" Jan 29 11:13:15.604587 containerd[1547]: time="2025-01-29T11:13:15.604554963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:0,}" Jan 29 11:13:15.605026 containerd[1547]: time="2025-01-29T11:13:15.604981764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:13:15.611685 containerd[1547]: time="2025-01-29T11:13:15.611645017Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:13:15.614015 containerd[1547]: time="2025-01-29T11:13:15.612162978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:13:15.723694 containerd[1547]: time="2025-01-29T11:13:15.723628921Z" level=error msg="Failed to destroy network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.728701 containerd[1547]: time="2025-01-29T11:13:15.728647451Z" level=error msg="Failed to destroy network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.729720 containerd[1547]: time="2025-01-29T11:13:15.729672853Z" level=error msg="encountered an error cleaning up failed sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.729786 containerd[1547]: time="2025-01-29T11:13:15.729745213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.732547 kubelet[2768]: E0129 11:13:15.732310 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.732547 kubelet[2768]: E0129 11:13:15.732428 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:15.732791 kubelet[2768]: E0129 11:13:15.732756 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:15.732870 kubelet[2768]: E0129 11:13:15.732840 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" podUID="28a382ce-ce53-43b1-8797-59a017f4f410" Jan 29 11:13:15.735338 containerd[1547]: time="2025-01-29T11:13:15.735296585Z" level=error msg="encountered an error cleaning up failed sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.735491 containerd[1547]: time="2025-01-29T11:13:15.735467505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.735619 containerd[1547]: time="2025-01-29T11:13:15.735490985Z" level=error msg="Failed to destroy network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.735784 kubelet[2768]: E0129 11:13:15.735746 2768 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.735830 kubelet[2768]: E0129 11:13:15.735798 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:15.735830 kubelet[2768]: E0129 11:13:15.735817 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:15.735883 kubelet[2768]: E0129 11:13:15.735859 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:15.736151 containerd[1547]: time="2025-01-29T11:13:15.736124586Z" level=error msg="encountered an error cleaning up failed sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.736267 containerd[1547]: time="2025-01-29T11:13:15.736231906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.736558 kubelet[2768]: E0129 11:13:15.736521 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.736627 kubelet[2768]: E0129 11:13:15.736563 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:15.736627 kubelet[2768]: E0129 11:13:15.736582 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:15.737025 kubelet[2768]: E0129 11:13:15.736660 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qzp2w" podUID="189a793a-2e8c-4371-8403-accce88972f2" Jan 29 11:13:15.737887 containerd[1547]: time="2025-01-29T11:13:15.737849470Z" level=error msg="Failed to destroy network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.738678 containerd[1547]: time="2025-01-29T11:13:15.738643751Z" level=error msg="encountered an error cleaning up failed sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.738737 containerd[1547]: time="2025-01-29T11:13:15.738699231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.738941 kubelet[2768]: E0129 11:13:15.738911 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.738984 kubelet[2768]: E0129 11:13:15.738956 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:15.738984 kubelet[2768]: E0129 11:13:15.738975 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:15.739044 kubelet[2768]: E0129 11:13:15.739008 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" podUID="9ff24cea-d161-45f7-b61c-4aeb38cb42ab" Jan 29 11:13:15.739460 containerd[1547]: time="2025-01-29T11:13:15.739364393Z" level=error msg="Failed to destroy network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.739977 containerd[1547]: time="2025-01-29T11:13:15.739688073Z" level=error msg="encountered an error cleaning up failed sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.739977 containerd[1547]: time="2025-01-29T11:13:15.739729073Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.740059 kubelet[2768]: E0129 11:13:15.739868 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.740059 kubelet[2768]: E0129 11:13:15.739903 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:15.740059 kubelet[2768]: E0129 11:13:15.739919 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:15.740132 kubelet[2768]: E0129 11:13:15.739945 2768 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w884t" podUID="a0701364-f7fc-4c5f-95e0-131b1446a6f8" Jan 29 11:13:15.749328 containerd[1547]: time="2025-01-29T11:13:15.749154852Z" level=error msg="Failed to destroy network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.749988 containerd[1547]: time="2025-01-29T11:13:15.749572773Z" level=error msg="encountered an error cleaning up failed sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.749988 containerd[1547]: time="2025-01-29T11:13:15.749620493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.750079 kubelet[2768]: E0129 11:13:15.749759 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:15.750079 kubelet[2768]: E0129 11:13:15.749790 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:15.750079 kubelet[2768]: E0129 11:13:15.749809 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:15.750193 kubelet[2768]: E0129 11:13:15.749836 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" podUID="932cd32e-4633-4030-9143-1db1b953d083" Jan 29 11:13:16.492343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b-shm.mount: Deactivated successfully. Jan 29 11:13:16.492742 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83-shm.mount: Deactivated successfully. Jan 29 11:13:16.605813 kubelet[2768]: I0129 11:13:16.605438 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86" Jan 29 11:13:16.611164 containerd[1547]: time="2025-01-29T11:13:16.607747050Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:16.611164 containerd[1547]: time="2025-01-29T11:13:16.607930051Z" level=info msg="Ensure that sandbox 24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86 in task-service has been cleanup successfully" Jan 29 11:13:16.611164 containerd[1547]: time="2025-01-29T11:13:16.608484252Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:16.611164 containerd[1547]: time="2025-01-29T11:13:16.608501092Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:16.611164 containerd[1547]: time="2025-01-29T11:13:16.610651696Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:1,}" Jan 29 11:13:16.611873 kubelet[2768]: I0129 11:13:16.611047 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce" Jan 29 11:13:16.610842 systemd[1]: run-netns-cni\x2d0e840fb0\x2d3cec\x2d1127\x2da239\x2d6236fca93eff.mount: Deactivated successfully. Jan 29 11:13:16.612644 containerd[1547]: time="2025-01-29T11:13:16.611568698Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:16.612644 containerd[1547]: time="2025-01-29T11:13:16.612124939Z" level=info msg="Ensure that sandbox 124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce in task-service has been cleanup successfully" Jan 29 11:13:16.612731 kubelet[2768]: I0129 11:13:16.612258 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe" Jan 29 11:13:16.612792 containerd[1547]: time="2025-01-29T11:13:16.612761580Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:16.612930 containerd[1547]: time="2025-01-29T11:13:16.612905340Z" level=info msg="Ensure that sandbox 43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe in task-service has been cleanup successfully" Jan 29 11:13:16.618423 containerd[1547]: time="2025-01-29T11:13:16.617825830Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:16.618423 containerd[1547]: time="2025-01-29T11:13:16.617868310Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:16.618356 systemd[1]: 
run-netns-cni\x2dede8b274\x2d5aae\x2d6aa7\x2d9ce3\x2de46255ec3dcc.mount: Deactivated successfully. Jan 29 11:13:16.618599 kubelet[2768]: I0129 11:13:16.618440 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.619092792Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.619348753Z" level=info msg="Ensure that sandbox b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b in task-service has been cleanup successfully" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.619820674Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.619840874Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.621552517Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.621575157Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:16.622688 containerd[1547]: time="2025-01-29T11:13:16.622142038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:1,}" Jan 29 11:13:16.626898 kubelet[2768]: E0129 11:13:16.621882 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:16.620635 systemd[1]: run-netns-cni\x2d45e6e12b\x2dac97\x2d0c56\x2d547b\x2d7f4c4d65f881.mount: Deactivated successfully. Jan 29 11:13:16.627024 containerd[1547]: time="2025-01-29T11:13:16.622929800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:13:16.627024 containerd[1547]: time="2025-01-29T11:13:16.623345720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:13:16.629501 kubelet[2768]: I0129 11:13:16.627881 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83" Jan 29 11:13:16.628851 systemd[1]: run-netns-cni\x2da7504ce3\x2d6e3f\x2d01e4\x2d6dda\x2d4e82accf3d83.mount: Deactivated successfully. 
Jan 29 11:13:16.629648 containerd[1547]: time="2025-01-29T11:13:16.628426290Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:16.629648 containerd[1547]: time="2025-01-29T11:13:16.628596770Z" level=info msg="Ensure that sandbox 266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83 in task-service has been cleanup successfully" Jan 29 11:13:16.631680 containerd[1547]: time="2025-01-29T11:13:16.631646376Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:16.631769 containerd[1547]: time="2025-01-29T11:13:16.631755497Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:16.631906 kubelet[2768]: I0129 11:13:16.631879 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f" Jan 29 11:13:16.632624 systemd[1]: run-netns-cni\x2d8cc54739\x2d41b7\x2d91d1\x2d379d\x2defca89d02fdd.mount: Deactivated successfully. 
Jan 29 11:13:16.633634 containerd[1547]: time="2025-01-29T11:13:16.633597300Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:16.634195 containerd[1547]: time="2025-01-29T11:13:16.633745900Z" level=info msg="Ensure that sandbox 4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f in task-service has been cleanup successfully" Jan 29 11:13:16.634195 containerd[1547]: time="2025-01-29T11:13:16.633993341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:1,}" Jan 29 11:13:16.634288 containerd[1547]: time="2025-01-29T11:13:16.634261101Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:16.634288 containerd[1547]: time="2025-01-29T11:13:16.634275301Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:16.634548 kubelet[2768]: E0129 11:13:16.634519 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:16.634782 containerd[1547]: time="2025-01-29T11:13:16.634752302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:1,}" Jan 29 11:13:16.876615 containerd[1547]: time="2025-01-29T11:13:16.876366690Z" level=error msg="Failed to destroy network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.877727 containerd[1547]: 
time="2025-01-29T11:13:16.877696772Z" level=error msg="encountered an error cleaning up failed sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.877788 containerd[1547]: time="2025-01-29T11:13:16.877760852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.878327 kubelet[2768]: E0129 11:13:16.877987 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.878327 kubelet[2768]: E0129 11:13:16.878044 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:16.878327 kubelet[2768]: E0129 11:13:16.878065 2768 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:16.878484 kubelet[2768]: E0129 11:13:16.878101 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" podUID="932cd32e-4633-4030-9143-1db1b953d083" Jan 29 11:13:16.895678 containerd[1547]: time="2025-01-29T11:13:16.895565567Z" level=error msg="Failed to destroy network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.895984 containerd[1547]: time="2025-01-29T11:13:16.895957448Z" level=error msg="encountered an error cleaning up failed sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.896099 containerd[1547]: time="2025-01-29T11:13:16.896079168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.896445 kubelet[2768]: E0129 11:13:16.896321 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.896445 kubelet[2768]: E0129 11:13:16.896373 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:16.897081 kubelet[2768]: E0129 11:13:16.896394 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:16.897081 kubelet[2768]: E0129 11:13:16.896824 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w884t" podUID="a0701364-f7fc-4c5f-95e0-131b1446a6f8" Jan 29 11:13:16.926581 containerd[1547]: time="2025-01-29T11:13:16.926520547Z" level=error msg="Failed to destroy network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.934073 containerd[1547]: time="2025-01-29T11:13:16.931693677Z" level=error msg="encountered an error cleaning up failed sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.934197 containerd[1547]: time="2025-01-29T11:13:16.934117001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:1,} failed, 
error" error="failed to setup network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.935096 kubelet[2768]: E0129 11:13:16.934970 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.935096 kubelet[2768]: E0129 11:13:16.935063 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:16.935334 kubelet[2768]: E0129 11:13:16.935177 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:16.935334 kubelet[2768]: E0129 11:13:16.935230 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:16.936007 containerd[1547]: time="2025-01-29T11:13:16.935652364Z" level=error msg="Failed to destroy network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.937339 containerd[1547]: time="2025-01-29T11:13:16.937301927Z" level=error msg="Failed to destroy network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.938341 containerd[1547]: time="2025-01-29T11:13:16.938291289Z" level=error msg="encountered an error cleaning up failed sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.938394 containerd[1547]: time="2025-01-29T11:13:16.938358450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:1,} 
failed, error" error="failed to setup network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.939036 kubelet[2768]: E0129 11:13:16.939000 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.939249 kubelet[2768]: E0129 11:13:16.939054 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:16.939249 kubelet[2768]: E0129 11:13:16.939079 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:16.939249 kubelet[2768]: E0129 11:13:16.939111 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qzp2w" podUID="189a793a-2e8c-4371-8403-accce88972f2" Jan 29 11:13:16.939929 containerd[1547]: time="2025-01-29T11:13:16.939062171Z" level=error msg="encountered an error cleaning up failed sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.939929 containerd[1547]: time="2025-01-29T11:13:16.939107931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.940081 kubelet[2768]: E0129 11:13:16.939260 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:13:16.940081 kubelet[2768]: E0129 11:13:16.939297 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:16.940081 kubelet[2768]: E0129 11:13:16.939316 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:16.940398 kubelet[2768]: E0129 11:13:16.939342 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" podUID="9ff24cea-d161-45f7-b61c-4aeb38cb42ab" Jan 29 11:13:16.951936 containerd[1547]: time="2025-01-29T11:13:16.951894276Z" level=error msg="Failed to destroy 
network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.952843 containerd[1547]: time="2025-01-29T11:13:16.952639757Z" level=error msg="encountered an error cleaning up failed sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.952843 containerd[1547]: time="2025-01-29T11:13:16.952704037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.952957 kubelet[2768]: E0129 11:13:16.952911 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:16.953027 kubelet[2768]: E0129 11:13:16.952965 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:16.953027 kubelet[2768]: E0129 11:13:16.952983 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:16.953092 kubelet[2768]: E0129 11:13:16.953016 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" podUID="28a382ce-ce53-43b1-8797-59a017f4f410" Jan 29 11:13:17.493274 systemd[1]: run-netns-cni\x2da852fc2d\x2d2e40\x2dc2a4\x2dec94\x2d8c8cc2ada6d8.mount: Deactivated successfully. 
Jan 29 11:13:17.637424 kubelet[2768]: I0129 11:13:17.634918 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f" Jan 29 11:13:17.637811 containerd[1547]: time="2025-01-29T11:13:17.636673479Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:17.637811 containerd[1547]: time="2025-01-29T11:13:17.636834839Z" level=info msg="Ensure that sandbox 89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f in task-service has been cleanup successfully" Jan 29 11:13:17.637811 containerd[1547]: time="2025-01-29T11:13:17.637152080Z" level=info msg="TearDown network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" successfully" Jan 29 11:13:17.637811 containerd[1547]: time="2025-01-29T11:13:17.637172440Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" returns successfully" Jan 29 11:13:17.638948 containerd[1547]: time="2025-01-29T11:13:17.638433962Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:17.638948 containerd[1547]: time="2025-01-29T11:13:17.638525522Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:17.638948 containerd[1547]: time="2025-01-29T11:13:17.638538402Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:17.639062 kubelet[2768]: E0129 11:13:17.638824 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:17.639133 containerd[1547]: time="2025-01-29T11:13:17.639104123Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:2,}" Jan 29 11:13:17.640240 systemd[1]: run-netns-cni\x2dbf079ff6\x2d627a\x2db85e\x2ddeea\x2d305332e8ad9b.mount: Deactivated successfully. Jan 29 11:13:17.641149 kubelet[2768]: I0129 11:13:17.641069 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703" Jan 29 11:13:17.644201 containerd[1547]: time="2025-01-29T11:13:17.643193051Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:17.644201 containerd[1547]: time="2025-01-29T11:13:17.643355971Z" level=info msg="Ensure that sandbox 468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703 in task-service has been cleanup successfully" Jan 29 11:13:17.644201 containerd[1547]: time="2025-01-29T11:13:17.643574852Z" level=info msg="TearDown network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" successfully" Jan 29 11:13:17.644201 containerd[1547]: time="2025-01-29T11:13:17.643592892Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" returns successfully" Jan 29 11:13:17.644382 containerd[1547]: time="2025-01-29T11:13:17.644357133Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:17.644473 containerd[1547]: time="2025-01-29T11:13:17.644454373Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:17.644536 containerd[1547]: time="2025-01-29T11:13:17.644507973Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:17.644996 containerd[1547]: time="2025-01-29T11:13:17.644970774Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:2,}" Jan 29 11:13:17.647804 kubelet[2768]: I0129 11:13:17.647628 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca" Jan 29 11:13:17.648105 systemd[1]: run-netns-cni\x2dec0b0f9f\x2d8c59\x2ddd40\x2db484\x2dfb71297eaafa.mount: Deactivated successfully. Jan 29 11:13:17.649338 containerd[1547]: time="2025-01-29T11:13:17.649305382Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:17.649498 containerd[1547]: time="2025-01-29T11:13:17.649472663Z" level=info msg="Ensure that sandbox a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca in task-service has been cleanup successfully" Jan 29 11:13:17.649819 containerd[1547]: time="2025-01-29T11:13:17.649799103Z" level=info msg="TearDown network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" successfully" Jan 29 11:13:17.649819 containerd[1547]: time="2025-01-29T11:13:17.649821023Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" returns successfully" Jan 29 11:13:17.652164 containerd[1547]: time="2025-01-29T11:13:17.652127508Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:17.653159 systemd[1]: run-netns-cni\x2dc959eb8c\x2da654\x2d5de0\x2db5d7\x2d992acd43860d.mount: Deactivated successfully. 
Jan 29 11:13:17.653353 containerd[1547]: time="2025-01-29T11:13:17.653328790Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:17.653401 containerd[1547]: time="2025-01-29T11:13:17.653390590Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:17.654560 kubelet[2768]: I0129 11:13:17.654541 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5" Jan 29 11:13:17.654692 containerd[1547]: time="2025-01-29T11:13:17.654663632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:13:17.655525 containerd[1547]: time="2025-01-29T11:13:17.655499594Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:17.655756 containerd[1547]: time="2025-01-29T11:13:17.655663074Z" level=info msg="Ensure that sandbox 4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5 in task-service has been cleanup successfully" Jan 29 11:13:17.656004 containerd[1547]: time="2025-01-29T11:13:17.655933435Z" level=info msg="TearDown network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" successfully" Jan 29 11:13:17.658998 containerd[1547]: time="2025-01-29T11:13:17.656006555Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" returns successfully" Jan 29 11:13:17.659185 kubelet[2768]: I0129 11:13:17.659139 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e" Jan 29 11:13:17.659910 systemd[1]: 
run-netns-cni\x2d7dacf312\x2d764a\x2d3ad8\x2da46d\x2d01f5bf3ba65f.mount: Deactivated successfully. Jan 29 11:13:17.660035 containerd[1547]: time="2025-01-29T11:13:17.659938042Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:17.660102 containerd[1547]: time="2025-01-29T11:13:17.660082483Z" level=info msg="Ensure that sandbox 02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e in task-service has been cleanup successfully" Jan 29 11:13:17.660294 containerd[1547]: time="2025-01-29T11:13:17.660265843Z" level=info msg="TearDown network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" successfully" Jan 29 11:13:17.660294 containerd[1547]: time="2025-01-29T11:13:17.660286683Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" returns successfully" Jan 29 11:13:17.660366 containerd[1547]: time="2025-01-29T11:13:17.660356283Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:17.660711 containerd[1547]: time="2025-01-29T11:13:17.660432883Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:17.660711 containerd[1547]: time="2025-01-29T11:13:17.660447443Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:17.660921 containerd[1547]: time="2025-01-29T11:13:17.660899524Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:17.661197 containerd[1547]: time="2025-01-29T11:13:17.661122045Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:17.661197 containerd[1547]: time="2025-01-29T11:13:17.661142205Z" level=info 
msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:17.661364 kubelet[2768]: E0129 11:13:17.661335 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:17.661616 containerd[1547]: time="2025-01-29T11:13:17.661591965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:13:17.661662 kubelet[2768]: I0129 11:13:17.661642 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea" Jan 29 11:13:17.662083 containerd[1547]: time="2025-01-29T11:13:17.662057766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:2,}" Jan 29 11:13:17.662567 containerd[1547]: time="2025-01-29T11:13:17.662543207Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 11:13:17.662713 containerd[1547]: time="2025-01-29T11:13:17.662695087Z" level=info msg="Ensure that sandbox 11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea in task-service has been cleanup successfully" Jan 29 11:13:17.662986 containerd[1547]: time="2025-01-29T11:13:17.662965128Z" level=info msg="TearDown network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" successfully" Jan 29 11:13:17.663049 containerd[1547]: time="2025-01-29T11:13:17.662986648Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" returns successfully" Jan 29 11:13:17.663620 containerd[1547]: time="2025-01-29T11:13:17.663598329Z" level=info msg="StopPodSandbox for 
\"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:17.663868 containerd[1547]: time="2025-01-29T11:13:17.663849490Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:17.663908 containerd[1547]: time="2025-01-29T11:13:17.663868450Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:17.665248 containerd[1547]: time="2025-01-29T11:13:17.665218572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:2,}" Jan 29 11:13:17.947156 containerd[1547]: time="2025-01-29T11:13:17.947102179Z" level=error msg="Failed to destroy network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.947449 containerd[1547]: time="2025-01-29T11:13:17.947422300Z" level=error msg="encountered an error cleaning up failed sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.947519 containerd[1547]: time="2025-01-29T11:13:17.947481900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.947745 kubelet[2768]: E0129 11:13:17.947702 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.948109 kubelet[2768]: E0129 11:13:17.947929 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:17.948109 kubelet[2768]: E0129 11:13:17.947957 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:17.948109 kubelet[2768]: E0129 11:13:17.948005 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qzp2w" podUID="189a793a-2e8c-4371-8403-accce88972f2" Jan 29 11:13:17.954147 containerd[1547]: time="2025-01-29T11:13:17.953095270Z" level=error msg="Failed to destroy network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.954255 containerd[1547]: time="2025-01-29T11:13:17.954036912Z" level=error msg="encountered an error cleaning up failed sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.954293 containerd[1547]: time="2025-01-29T11:13:17.954268512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.954506 kubelet[2768]: E0129 11:13:17.954478 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.954622 kubelet[2768]: E0129 11:13:17.954606 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:17.954694 kubelet[2768]: E0129 11:13:17.954679 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:17.954818 kubelet[2768]: E0129 11:13:17.954797 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" podUID="932cd32e-4633-4030-9143-1db1b953d083" Jan 29 11:13:17.965223 containerd[1547]: time="2025-01-29T11:13:17.965183613Z" level=error msg="Failed to destroy network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.966338 containerd[1547]: time="2025-01-29T11:13:17.966307735Z" level=error msg="encountered an error cleaning up failed sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.966391 containerd[1547]: time="2025-01-29T11:13:17.966365735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.966958 kubelet[2768]: E0129 11:13:17.966548 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.966958 kubelet[2768]: E0129 
11:13:17.966604 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:17.966958 kubelet[2768]: E0129 11:13:17.966625 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:17.967237 kubelet[2768]: E0129 11:13:17.966660 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" podUID="28a382ce-ce53-43b1-8797-59a017f4f410" Jan 29 11:13:17.972308 containerd[1547]: time="2025-01-29T11:13:17.972266386Z" level=error msg="Failed to destroy network for sandbox 
\"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973296 containerd[1547]: time="2025-01-29T11:13:17.972461546Z" level=error msg="Failed to destroy network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973296 containerd[1547]: time="2025-01-29T11:13:17.972803347Z" level=error msg="encountered an error cleaning up failed sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973296 containerd[1547]: time="2025-01-29T11:13:17.972852627Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973429 kubelet[2768]: E0129 11:13:17.973025 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973429 kubelet[2768]: E0129 11:13:17.973062 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:17.973429 kubelet[2768]: E0129 11:13:17.973080 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:17.973513 kubelet[2768]: E0129 11:13:17.973112 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w884t" podUID="a0701364-f7fc-4c5f-95e0-131b1446a6f8" Jan 29 11:13:17.973557 containerd[1547]: time="2025-01-29T11:13:17.973324788Z" level=error msg="encountered an error cleaning 
up failed sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.973557 containerd[1547]: time="2025-01-29T11:13:17.973480308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.974932 kubelet[2768]: E0129 11:13:17.973698 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.974932 kubelet[2768]: E0129 11:13:17.974838 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:17.974932 kubelet[2768]: E0129 11:13:17.974867 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:17.975090 kubelet[2768]: E0129 11:13:17.974900 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" podUID="9ff24cea-d161-45f7-b61c-4aeb38cb42ab" Jan 29 11:13:17.976954 containerd[1547]: time="2025-01-29T11:13:17.976909875Z" level=error msg="Failed to destroy network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.977296 containerd[1547]: time="2025-01-29T11:13:17.977266275Z" level=error msg="encountered an error cleaning up failed sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jan 29 11:13:17.977328 containerd[1547]: time="2025-01-29T11:13:17.977316236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.977558 kubelet[2768]: E0129 11:13:17.977454 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:17.977558 kubelet[2768]: E0129 11:13:17.977488 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:17.977558 kubelet[2768]: E0129 11:13:17.977503 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:17.977653 kubelet[2768]: E0129 11:13:17.977527 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:18.051661 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:34818.service - OpenSSH per-connection server daemon (10.0.0.1:34818). Jan 29 11:13:18.100922 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 34818 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:18.102599 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:18.107265 systemd-logind[1528]: New session 9 of user core. Jan 29 11:13:18.112987 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:13:18.275022 sshd[4281]: Connection closed by 10.0.0.1 port 34818 Jan 29 11:13:18.275272 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:18.278497 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:34818.service: Deactivated successfully. Jan 29 11:13:18.280723 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:13:18.280885 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:13:18.283164 systemd-logind[1528]: Removed session 9. 
Jan 29 11:13:18.493085 systemd[1]: run-netns-cni\x2dbe50cf5b\x2d1570\x2db055\x2d3f31\x2dd5e52f8ea059.mount: Deactivated successfully. Jan 29 11:13:18.493466 systemd[1]: run-netns-cni\x2dc1c5fa25\x2df9cc\x2dd464\x2d6e26\x2d15a114d228aa.mount: Deactivated successfully. Jan 29 11:13:18.666437 kubelet[2768]: I0129 11:13:18.664414 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.665133721Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.665297881Z" level=info msg="Ensure that sandbox d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034 in task-service has been cleanup successfully" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.665471401Z" level=info msg="TearDown network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" successfully" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.665487362Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" returns successfully" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.665986882Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.666074003Z" level=info msg="TearDown network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" successfully" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.666084283Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" returns successfully" Jan 29 11:13:18.667677 containerd[1547]: time="2025-01-29T11:13:18.666839724Z" level=info 
msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:18.668118 containerd[1547]: time="2025-01-29T11:13:18.668059446Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:18.668160 containerd[1547]: time="2025-01-29T11:13:18.668112366Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:18.668846 systemd[1]: run-netns-cni\x2d365bce42\x2d6c75\x2de984\x2d2f09\x2d176440ee0533.mount: Deactivated successfully. Jan 29 11:13:18.669583 containerd[1547]: time="2025-01-29T11:13:18.669146288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:13:18.670099 kubelet[2768]: I0129 11:13:18.670072 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094" Jan 29 11:13:18.670709 containerd[1547]: time="2025-01-29T11:13:18.670677011Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" Jan 29 11:13:18.670867 containerd[1547]: time="2025-01-29T11:13:18.670845411Z" level=info msg="Ensure that sandbox 8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094 in task-service has been cleanup successfully" Jan 29 11:13:18.671744 containerd[1547]: time="2025-01-29T11:13:18.671459412Z" level=info msg="TearDown network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" successfully" Jan 29 11:13:18.671744 containerd[1547]: time="2025-01-29T11:13:18.671484412Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" returns successfully" Jan 29 11:13:18.672324 containerd[1547]: 
time="2025-01-29T11:13:18.672059493Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:18.672488 containerd[1547]: time="2025-01-29T11:13:18.672467854Z" level=info msg="TearDown network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" successfully" Jan 29 11:13:18.672855 containerd[1547]: time="2025-01-29T11:13:18.672742175Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" returns successfully" Jan 29 11:13:18.673519 containerd[1547]: time="2025-01-29T11:13:18.673492376Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:18.674332 kubelet[2768]: I0129 11:13:18.673729 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326" Jan 29 11:13:18.674398 containerd[1547]: time="2025-01-29T11:13:18.674205537Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:18.674754 systemd[1]: run-netns-cni\x2df8a1e461\x2da7d5\x2df204\x2d915c\x2d3688e95bdeb6.mount: Deactivated successfully. 
Jan 29 11:13:18.675646 containerd[1547]: time="2025-01-29T11:13:18.674223017Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:18.675646 containerd[1547]: time="2025-01-29T11:13:18.675368819Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" Jan 29 11:13:18.675646 containerd[1547]: time="2025-01-29T11:13:18.675533980Z" level=info msg="Ensure that sandbox 4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326 in task-service has been cleanup successfully" Jan 29 11:13:18.676048 kubelet[2768]: E0129 11:13:18.676025 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:18.677945 containerd[1547]: time="2025-01-29T11:13:18.677879544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:3,}" Jan 29 11:13:18.678137 systemd[1]: run-netns-cni\x2d182c65c1\x2d72d5\x2d56c7\x2d177d\x2d5fb10dd04473.mount: Deactivated successfully. 
Jan 29 11:13:18.679841 containerd[1547]: time="2025-01-29T11:13:18.679749907Z" level=info msg="TearDown network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" successfully" Jan 29 11:13:18.679841 containerd[1547]: time="2025-01-29T11:13:18.679779307Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" returns successfully" Jan 29 11:13:18.680111 containerd[1547]: time="2025-01-29T11:13:18.680088708Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:18.680189 containerd[1547]: time="2025-01-29T11:13:18.680173588Z" level=info msg="TearDown network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" successfully" Jan 29 11:13:18.680228 containerd[1547]: time="2025-01-29T11:13:18.680188228Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" returns successfully" Jan 29 11:13:18.680744 containerd[1547]: time="2025-01-29T11:13:18.680581509Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:18.680744 containerd[1547]: time="2025-01-29T11:13:18.680678069Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:18.680744 containerd[1547]: time="2025-01-29T11:13:18.680689149Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:18.681066 kubelet[2768]: E0129 11:13:18.681026 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:18.681592 containerd[1547]: time="2025-01-29T11:13:18.681285270Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:3,}" Jan 29 11:13:18.685989 kubelet[2768]: I0129 11:13:18.685726 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666" Jan 29 11:13:18.686459 containerd[1547]: time="2025-01-29T11:13:18.686390719Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" Jan 29 11:13:18.686610 containerd[1547]: time="2025-01-29T11:13:18.686570120Z" level=info msg="Ensure that sandbox 3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666 in task-service has been cleanup successfully" Jan 29 11:13:18.688125 containerd[1547]: time="2025-01-29T11:13:18.687152681Z" level=info msg="TearDown network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" successfully" Jan 29 11:13:18.688125 containerd[1547]: time="2025-01-29T11:13:18.687422801Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" returns successfully" Jan 29 11:13:18.688377 containerd[1547]: time="2025-01-29T11:13:18.688097002Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:18.688525 containerd[1547]: time="2025-01-29T11:13:18.688508083Z" level=info msg="TearDown network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" successfully" Jan 29 11:13:18.688597 containerd[1547]: time="2025-01-29T11:13:18.688583883Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" returns successfully" Jan 29 11:13:18.690857 containerd[1547]: time="2025-01-29T11:13:18.690047206Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:18.690857 containerd[1547]: 
time="2025-01-29T11:13:18.690237606Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:18.690857 containerd[1547]: time="2025-01-29T11:13:18.690279446Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:18.690278 systemd[1]: run-netns-cni\x2dc2701afc\x2d4517\x2d0d57\x2d80e3\x2d8603a5d822d9.mount: Deactivated successfully. Jan 29 11:13:18.691273 kubelet[2768]: I0129 11:13:18.691239 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d" Jan 29 11:13:18.692831 containerd[1547]: time="2025-01-29T11:13:18.692776411Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" Jan 29 11:13:18.693016 containerd[1547]: time="2025-01-29T11:13:18.692988931Z" level=info msg="Ensure that sandbox a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d in task-service has been cleanup successfully" Jan 29 11:13:18.693313 containerd[1547]: time="2025-01-29T11:13:18.693276532Z" level=info msg="TearDown network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" successfully" Jan 29 11:13:18.693538 containerd[1547]: time="2025-01-29T11:13:18.693313572Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" returns successfully" Jan 29 11:13:18.693765 containerd[1547]: time="2025-01-29T11:13:18.693739213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:3,}" Jan 29 11:13:18.697869 containerd[1547]: time="2025-01-29T11:13:18.697828500Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 
11:13:18.697969 containerd[1547]: time="2025-01-29T11:13:18.697948060Z" level=info msg="TearDown network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" successfully" Jan 29 11:13:18.697969 containerd[1547]: time="2025-01-29T11:13:18.697966260Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" returns successfully" Jan 29 11:13:18.698462 containerd[1547]: time="2025-01-29T11:13:18.698428901Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:18.698543 containerd[1547]: time="2025-01-29T11:13:18.698525661Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:18.698587 containerd[1547]: time="2025-01-29T11:13:18.698541741Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:18.699246 containerd[1547]: time="2025-01-29T11:13:18.699211023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:3,}" Jan 29 11:13:18.700415 kubelet[2768]: I0129 11:13:18.700373 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1" Jan 29 11:13:18.702372 containerd[1547]: time="2025-01-29T11:13:18.701904747Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" Jan 29 11:13:18.702372 containerd[1547]: time="2025-01-29T11:13:18.702219868Z" level=info msg="Ensure that sandbox c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1 in task-service has been cleanup successfully" Jan 29 11:13:18.702557 containerd[1547]: time="2025-01-29T11:13:18.702535349Z" level=info msg="TearDown network for sandbox 
\"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" successfully" Jan 29 11:13:18.702646 containerd[1547]: time="2025-01-29T11:13:18.702633589Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" returns successfully" Jan 29 11:13:18.703572 containerd[1547]: time="2025-01-29T11:13:18.703550510Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:18.703773 containerd[1547]: time="2025-01-29T11:13:18.703755311Z" level=info msg="TearDown network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" successfully" Jan 29 11:13:18.703855 containerd[1547]: time="2025-01-29T11:13:18.703839711Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" returns successfully" Jan 29 11:13:18.704268 containerd[1547]: time="2025-01-29T11:13:18.704191112Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:18.704346 containerd[1547]: time="2025-01-29T11:13:18.704330472Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:18.704431 containerd[1547]: time="2025-01-29T11:13:18.704346592Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:18.704896 containerd[1547]: time="2025-01-29T11:13:18.704824833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:13:18.984099 containerd[1547]: time="2025-01-29T11:13:18.983970697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:13:18.985052 containerd[1547]: 
time="2025-01-29T11:13:18.984333898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:18.985746 containerd[1547]: time="2025-01-29T11:13:18.985711981Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:18.997150 containerd[1547]: time="2025-01-29T11:13:18.997092121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:19.003095 containerd[1547]: time="2025-01-29T11:13:19.002881372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.397844008s" Jan 29 11:13:19.003095 containerd[1547]: time="2025-01-29T11:13:19.003096252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:13:19.011963 containerd[1547]: time="2025-01-29T11:13:19.011921267Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:13:19.048173 containerd[1547]: time="2025-01-29T11:13:19.048118571Z" level=info msg="CreateContainer within sandbox \"a7e6de1ad6c0ef121d22838206840dc6bd20f6030ce23e9805b7474d5702361b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"333e80aade81fc2494427c95def1124a50d4557121a543b537a7c6787f2b879c\"" 
Jan 29 11:13:19.050084 containerd[1547]: time="2025-01-29T11:13:19.048651492Z" level=info msg="StartContainer for \"333e80aade81fc2494427c95def1124a50d4557121a543b537a7c6787f2b879c\"" Jan 29 11:13:19.087711 containerd[1547]: time="2025-01-29T11:13:19.087568040Z" level=error msg="Failed to destroy network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.088217 containerd[1547]: time="2025-01-29T11:13:19.088166441Z" level=error msg="encountered an error cleaning up failed sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.088501 containerd[1547]: time="2025-01-29T11:13:19.088236841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.089544 kubelet[2768]: E0129 11:13:19.089480 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:13:19.089667 kubelet[2768]: E0129 11:13:19.089556 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:19.089667 kubelet[2768]: E0129 11:13:19.089585 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-w884t" Jan 29 11:13:19.089667 kubelet[2768]: E0129 11:13:19.089640 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-w884t_kube-system(a0701364-f7fc-4c5f-95e0-131b1446a6f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-w884t" podUID="a0701364-f7fc-4c5f-95e0-131b1446a6f8" Jan 29 11:13:19.093814 containerd[1547]: time="2025-01-29T11:13:19.093766331Z" level=error msg="Failed to destroy network for sandbox 
\"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.094118 containerd[1547]: time="2025-01-29T11:13:19.094086131Z" level=error msg="encountered an error cleaning up failed sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.094177 containerd[1547]: time="2025-01-29T11:13:19.094153971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.094575 kubelet[2768]: E0129 11:13:19.094395 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.094575 kubelet[2768]: E0129 11:13:19.094466 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:19.094575 kubelet[2768]: E0129 11:13:19.094484 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-whhjq" Jan 29 11:13:19.094693 kubelet[2768]: E0129 11:13:19.094532 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-whhjq_calico-system(e5725830-f2eb-461f-bca5-bd9a3c65abd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-whhjq" podUID="e5725830-f2eb-461f-bca5-bd9a3c65abd6" Jan 29 11:13:19.098693 containerd[1547]: time="2025-01-29T11:13:19.098645539Z" level=error msg="Failed to destroy network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.099076 containerd[1547]: time="2025-01-29T11:13:19.099036660Z" level=error msg="encountered an error cleaning up 
failed sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.099546 containerd[1547]: time="2025-01-29T11:13:19.099108740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.100224 kubelet[2768]: E0129 11:13:19.100178 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.100302 kubelet[2768]: E0129 11:13:19.100232 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:19.100302 kubelet[2768]: E0129 11:13:19.100263 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qzp2w" Jan 29 11:13:19.100354 kubelet[2768]: E0129 11:13:19.100307 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qzp2w_kube-system(189a793a-2e8c-4371-8403-accce88972f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qzp2w" podUID="189a793a-2e8c-4371-8403-accce88972f2" Jan 29 11:13:19.112169 containerd[1547]: time="2025-01-29T11:13:19.112126483Z" level=error msg="Failed to destroy network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.112487 containerd[1547]: time="2025-01-29T11:13:19.112458884Z" level=error msg="encountered an error cleaning up failed sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.112542 containerd[1547]: 
time="2025-01-29T11:13:19.112521204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.112771 kubelet[2768]: E0129 11:13:19.112734 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.112831 kubelet[2768]: E0129 11:13:19.112792 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 11:13:19.112831 kubelet[2768]: E0129 11:13:19.112812 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" Jan 29 
11:13:19.112878 kubelet[2768]: E0129 11:13:19.112851 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c6f7d4488-cn6bv_calico-apiserver(28a382ce-ce53-43b1-8797-59a017f4f410)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" podUID="28a382ce-ce53-43b1-8797-59a017f4f410" Jan 29 11:13:19.116271 containerd[1547]: time="2025-01-29T11:13:19.116224450Z" level=error msg="Failed to destroy network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.116550 containerd[1547]: time="2025-01-29T11:13:19.116523771Z" level=error msg="encountered an error cleaning up failed sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.116600 containerd[1547]: time="2025-01-29T11:13:19.116578131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.116825 kubelet[2768]: E0129 11:13:19.116793 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.116866 kubelet[2768]: E0129 11:13:19.116846 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:19.116898 kubelet[2768]: E0129 11:13:19.116864 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" Jan 29 11:13:19.116932 kubelet[2768]: E0129 11:13:19.116907 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-c6f7d4488-sk7nj_calico-apiserver(932cd32e-4633-4030-9143-1db1b953d083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" podUID="932cd32e-4633-4030-9143-1db1b953d083" Jan 29 11:13:19.123814 containerd[1547]: time="2025-01-29T11:13:19.123769223Z" level=error msg="Failed to destroy network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.124087 containerd[1547]: time="2025-01-29T11:13:19.124061264Z" level=error msg="encountered an error cleaning up failed sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.124138 containerd[1547]: time="2025-01-29T11:13:19.124119104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.124311 kubelet[2768]: E0129 
11:13:19.124276 2768 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:13:19.124361 kubelet[2768]: E0129 11:13:19.124332 2768 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:19.124361 kubelet[2768]: E0129 11:13:19.124352 2768 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" Jan 29 11:13:19.124429 kubelet[2768]: E0129 11:13:19.124383 2768 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-ff5cc644f-g54rc_calico-system(9ff24cea-d161-45f7-b61c-4aeb38cb42ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" podUID="9ff24cea-d161-45f7-b61c-4aeb38cb42ab" Jan 29 11:13:19.159587 containerd[1547]: time="2025-01-29T11:13:19.159537686Z" level=info msg="StartContainer for \"333e80aade81fc2494427c95def1124a50d4557121a543b537a7c6787f2b879c\" returns successfully" Jan 29 11:13:19.328431 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:13:19.328547 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:13:19.496085 systemd[1]: run-netns-cni\x2dc9ce6a8e\x2dd8d6\x2d4c15\x2dcd68\x2d0a51c1c492a4.mount: Deactivated successfully. Jan 29 11:13:19.496236 systemd[1]: run-netns-cni\x2de68303c1\x2d2f61\x2d48ac\x2dafd1\x2d5a3612e9c8af.mount: Deactivated successfully. Jan 29 11:13:19.496329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146034856.mount: Deactivated successfully. 
Jan 29 11:13:19.705117 kubelet[2768]: E0129 11:13:19.704959 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:19.708648 kubelet[2768]: I0129 11:13:19.707361 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8" Jan 29 11:13:19.708747 containerd[1547]: time="2025-01-29T11:13:19.707801166Z" level=info msg="StopPodSandbox for \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\"" Jan 29 11:13:19.708747 containerd[1547]: time="2025-01-29T11:13:19.707967167Z" level=info msg="Ensure that sandbox ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8 in task-service has been cleanup successfully" Jan 29 11:13:19.710661 systemd[1]: run-netns-cni\x2d69b7063a\x2dcfdb\x2d060f\x2d08bf\x2df298309e8d03.mount: Deactivated successfully. Jan 29 11:13:19.712146 containerd[1547]: time="2025-01-29T11:13:19.711524973Z" level=info msg="TearDown network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" successfully" Jan 29 11:13:19.712146 containerd[1547]: time="2025-01-29T11:13:19.711563973Z" level=info msg="StopPodSandbox for \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" returns successfully" Jan 29 11:13:19.712888 kubelet[2768]: I0129 11:13:19.712562 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52" Jan 29 11:13:19.713000 containerd[1547]: time="2025-01-29T11:13:19.712971895Z" level=info msg="StopPodSandbox for \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\"" Jan 29 11:13:19.713152 containerd[1547]: time="2025-01-29T11:13:19.713133736Z" level=info msg="Ensure that sandbox aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52 in 
task-service has been cleanup successfully" Jan 29 11:13:19.713844 containerd[1547]: time="2025-01-29T11:13:19.713643216Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" Jan 29 11:13:19.713844 containerd[1547]: time="2025-01-29T11:13:19.713719057Z" level=info msg="TearDown network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" successfully" Jan 29 11:13:19.713844 containerd[1547]: time="2025-01-29T11:13:19.713730697Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" returns successfully" Jan 29 11:13:19.715723 systemd[1]: run-netns-cni\x2d593860c8\x2d8c06\x2d0461\x2dee10\x2d39e4d3405c0c.mount: Deactivated successfully. Jan 29 11:13:19.716221 containerd[1547]: time="2025-01-29T11:13:19.716046821Z" level=info msg="TearDown network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" successfully" Jan 29 11:13:19.716221 containerd[1547]: time="2025-01-29T11:13:19.716074101Z" level=info msg="StopPodSandbox for \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" returns successfully" Jan 29 11:13:19.716605 containerd[1547]: time="2025-01-29T11:13:19.716571182Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:19.716669 containerd[1547]: time="2025-01-29T11:13:19.716654862Z" level=info msg="TearDown network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" successfully" Jan 29 11:13:19.716669 containerd[1547]: time="2025-01-29T11:13:19.716668782Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" returns successfully" Jan 29 11:13:19.717195 containerd[1547]: time="2025-01-29T11:13:19.717168463Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:19.717323 
containerd[1547]: time="2025-01-29T11:13:19.717306783Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:19.717362 containerd[1547]: time="2025-01-29T11:13:19.717323743Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:19.717507 containerd[1547]: time="2025-01-29T11:13:19.717461743Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" Jan 29 11:13:19.717576 containerd[1547]: time="2025-01-29T11:13:19.717560943Z" level=info msg="TearDown network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" successfully" Jan 29 11:13:19.717620 containerd[1547]: time="2025-01-29T11:13:19.717575263Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" returns successfully" Jan 29 11:13:19.718564 kubelet[2768]: E0129 11:13:19.718544 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:19.718808 containerd[1547]: time="2025-01-29T11:13:19.718731425Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:19.718890 containerd[1547]: time="2025-01-29T11:13:19.718854986Z" level=info msg="TearDown network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" successfully" Jan 29 11:13:19.718890 containerd[1547]: time="2025-01-29T11:13:19.718870026Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" returns successfully" Jan 29 11:13:19.718972 containerd[1547]: time="2025-01-29T11:13:19.718865346Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:4,}" Jan 29 11:13:19.719000 kubelet[2768]: I0129 11:13:19.718889 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb" Jan 29 11:13:19.719386 containerd[1547]: time="2025-01-29T11:13:19.719166226Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:19.719386 containerd[1547]: time="2025-01-29T11:13:19.719265866Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:19.719386 containerd[1547]: time="2025-01-29T11:13:19.719286186Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:19.719675 containerd[1547]: time="2025-01-29T11:13:19.719634627Z" level=info msg="StopPodSandbox for \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\"" Jan 29 11:13:19.719936 containerd[1547]: time="2025-01-29T11:13:19.719785107Z" level=info msg="Ensure that sandbox 55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb in task-service has been cleanup successfully" Jan 29 11:13:19.720047 containerd[1547]: time="2025-01-29T11:13:19.720028508Z" level=info msg="TearDown network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" successfully" Jan 29 11:13:19.720182 containerd[1547]: time="2025-01-29T11:13:19.720047748Z" level=info msg="StopPodSandbox for \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" returns successfully" Jan 29 11:13:19.720320 containerd[1547]: time="2025-01-29T11:13:19.720204428Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:4,}" Jan 29 11:13:19.721488 containerd[1547]: time="2025-01-29T11:13:19.721448630Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" Jan 29 11:13:19.721571 containerd[1547]: time="2025-01-29T11:13:19.721539470Z" level=info msg="TearDown network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" successfully" Jan 29 11:13:19.721571 containerd[1547]: time="2025-01-29T11:13:19.721552190Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" returns successfully" Jan 29 11:13:19.722358 kubelet[2768]: I0129 11:13:19.722305 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gmkgp" podStartSLOduration=1.6824725759999999 podStartE2EDuration="10.722291672s" podCreationTimestamp="2025-01-29 11:13:09 +0000 UTC" firstStartedPulling="2025-01-29 11:13:09.964844799 +0000 UTC m=+25.542225814" lastFinishedPulling="2025-01-29 11:13:19.004663935 +0000 UTC m=+34.582044910" observedRunningTime="2025-01-29 11:13:19.72110107 +0000 UTC m=+35.298482085" watchObservedRunningTime="2025-01-29 11:13:19.722291672 +0000 UTC m=+35.299672687" Jan 29 11:13:19.723286 systemd[1]: run-netns-cni\x2d6affe1fd\x2d7a7e\x2d8b94\x2d6442\x2d1b966983d943.mount: Deactivated successfully. 
Jan 29 11:13:19.723855 containerd[1547]: time="2025-01-29T11:13:19.723653714Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:19.723855 containerd[1547]: time="2025-01-29T11:13:19.723743874Z" level=info msg="TearDown network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" successfully" Jan 29 11:13:19.723855 containerd[1547]: time="2025-01-29T11:13:19.723754034Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" returns successfully" Jan 29 11:13:19.724314 containerd[1547]: time="2025-01-29T11:13:19.724293435Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:19.724701 containerd[1547]: time="2025-01-29T11:13:19.724624156Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:19.724701 containerd[1547]: time="2025-01-29T11:13:19.724642116Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:19.725853 containerd[1547]: time="2025-01-29T11:13:19.725612437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:13:19.727237 kubelet[2768]: I0129 11:13:19.726834 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc" Jan 29 11:13:19.727320 containerd[1547]: time="2025-01-29T11:13:19.727276640Z" level=info msg="StopPodSandbox for \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\"" Jan 29 11:13:19.727584 containerd[1547]: time="2025-01-29T11:13:19.727447001Z" level=info msg="Ensure that sandbox 
878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc in task-service has been cleanup successfully" Jan 29 11:13:19.728981 containerd[1547]: time="2025-01-29T11:13:19.728173802Z" level=info msg="TearDown network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" successfully" Jan 29 11:13:19.728981 containerd[1547]: time="2025-01-29T11:13:19.728197282Z" level=info msg="StopPodSandbox for \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" returns successfully" Jan 29 11:13:19.729429 containerd[1547]: time="2025-01-29T11:13:19.729198204Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" Jan 29 11:13:19.729429 containerd[1547]: time="2025-01-29T11:13:19.729286924Z" level=info msg="TearDown network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" successfully" Jan 29 11:13:19.729429 containerd[1547]: time="2025-01-29T11:13:19.729297724Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" returns successfully" Jan 29 11:13:19.730477 containerd[1547]: time="2025-01-29T11:13:19.730287526Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:19.730477 containerd[1547]: time="2025-01-29T11:13:19.730370166Z" level=info msg="TearDown network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" successfully" Jan 29 11:13:19.730477 containerd[1547]: time="2025-01-29T11:13:19.730389406Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" returns successfully" Jan 29 11:13:19.730629 systemd[1]: run-netns-cni\x2dc120f81f\x2d1f3c\x2d400f\x2d1bd2\x2d3389f7f7cc34.mount: Deactivated successfully. 
Jan 29 11:13:19.730905 containerd[1547]: time="2025-01-29T11:13:19.730879887Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:19.731010 containerd[1547]: time="2025-01-29T11:13:19.730961367Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:19.731010 containerd[1547]: time="2025-01-29T11:13:19.730976287Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:19.731446 kubelet[2768]: I0129 11:13:19.731400 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584" Jan 29 11:13:19.732172 containerd[1547]: time="2025-01-29T11:13:19.731929889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:13:19.732396 containerd[1547]: time="2025-01-29T11:13:19.732375689Z" level=info msg="StopPodSandbox for \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\"" Jan 29 11:13:19.732903 containerd[1547]: time="2025-01-29T11:13:19.732833370Z" level=info msg="Ensure that sandbox 6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584 in task-service has been cleanup successfully" Jan 29 11:13:19.734242 kubelet[2768]: I0129 11:13:19.734152 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32" Jan 29 11:13:19.734390 containerd[1547]: time="2025-01-29T11:13:19.734119572Z" level=info msg="TearDown network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" successfully" Jan 29 11:13:19.734390 containerd[1547]: time="2025-01-29T11:13:19.734175732Z" level=info 
msg="StopPodSandbox for \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" returns successfully" Jan 29 11:13:19.735845 containerd[1547]: time="2025-01-29T11:13:19.735024014Z" level=info msg="StopPodSandbox for \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\"" Jan 29 11:13:19.735845 containerd[1547]: time="2025-01-29T11:13:19.735376615Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" Jan 29 11:13:19.735845 containerd[1547]: time="2025-01-29T11:13:19.735476855Z" level=info msg="TearDown network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" successfully" Jan 29 11:13:19.735845 containerd[1547]: time="2025-01-29T11:13:19.735488175Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" returns successfully" Jan 29 11:13:19.735990 containerd[1547]: time="2025-01-29T11:13:19.735939016Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:19.736078 containerd[1547]: time="2025-01-29T11:13:19.736055656Z" level=info msg="Ensure that sandbox 11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32 in task-service has been cleanup successfully" Jan 29 11:13:19.736265 containerd[1547]: time="2025-01-29T11:13:19.736064536Z" level=info msg="TearDown network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" successfully" Jan 29 11:13:19.736265 containerd[1547]: time="2025-01-29T11:13:19.736242936Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" returns successfully" Jan 29 11:13:19.736456 containerd[1547]: time="2025-01-29T11:13:19.736392176Z" level=info msg="TearDown network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" successfully" Jan 29 11:13:19.736456 containerd[1547]: 
time="2025-01-29T11:13:19.736427376Z" level=info msg="StopPodSandbox for \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" returns successfully" Jan 29 11:13:19.737621 containerd[1547]: time="2025-01-29T11:13:19.737589858Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:19.737849 containerd[1547]: time="2025-01-29T11:13:19.737800899Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:19.737849 containerd[1547]: time="2025-01-29T11:13:19.737846099Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:19.737994 containerd[1547]: time="2025-01-29T11:13:19.737609978Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" Jan 29 11:13:19.738083 containerd[1547]: time="2025-01-29T11:13:19.738042979Z" level=info msg="TearDown network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" successfully" Jan 29 11:13:19.738083 containerd[1547]: time="2025-01-29T11:13:19.738056979Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" returns successfully" Jan 29 11:13:19.738193 kubelet[2768]: E0129 11:13:19.738106 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:19.738934 containerd[1547]: time="2025-01-29T11:13:19.738384180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:4,}" Jan 29 11:13:19.738934 containerd[1547]: time="2025-01-29T11:13:19.738579700Z" level=info msg="StopPodSandbox for 
\"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 11:13:19.738934 containerd[1547]: time="2025-01-29T11:13:19.738673380Z" level=info msg="TearDown network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" successfully" Jan 29 11:13:19.738934 containerd[1547]: time="2025-01-29T11:13:19.738685700Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" returns successfully" Jan 29 11:13:19.739067 containerd[1547]: time="2025-01-29T11:13:19.738950061Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:19.739067 containerd[1547]: time="2025-01-29T11:13:19.739023061Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:19.739067 containerd[1547]: time="2025-01-29T11:13:19.739034581Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:19.739504 containerd[1547]: time="2025-01-29T11:13:19.739476462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:4,}" Jan 29 11:13:20.187975 systemd-networkd[1231]: calib4e3c4d281c: Link UP Jan 29 11:13:20.190657 systemd-networkd[1231]: calib4e3c4d281c: Gained carrier Jan 29 11:13:20.212136 systemd-networkd[1231]: calibd93a3bd12a: Link UP Jan 29 11:13:20.214886 systemd-networkd[1231]: calibd93a3bd12a: Gained carrier Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:19.780 [INFO][4606] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:19.873 [INFO][4606] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0 
coredns-7db6d8ff4d- kube-system 189a793a-2e8c-4371-8403-accce88972f2 771 0 2025-01-29 11:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-qzp2w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib4e3c4d281c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:19.873 [INFO][4606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.129 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" HandleID="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Workload="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.148 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" HandleID="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Workload="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000680050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-qzp2w", "timestamp":"2025-01-29 11:13:20.129286658 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.148 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.148 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.149 [INFO][4668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.151 [INFO][4668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.158 [INFO][4668] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.162 [INFO][4668] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.163 [INFO][4668] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.165 [INFO][4668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.165 [INFO][4668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.166 [INFO][4668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6 Jan 29 
11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.170 [INFO][4668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.175 [INFO][4668] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.175 [INFO][4668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" host="localhost" Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.175 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.216005 containerd[1547]: 2025-01-29 11:13:20.175 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" HandleID="k8s-pod-network.7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Workload="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.178 [INFO][4606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"189a793a-2e8c-4371-8403-accce88972f2", ResourceVersion:"771", Generation:0, 
CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-qzp2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e3c4d281c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.178 [INFO][4606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.178 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib4e3c4d281c ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.190 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.190 [INFO][4606] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"189a793a-2e8c-4371-8403-accce88972f2", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6", Pod:"coredns-7db6d8ff4d-qzp2w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib4e3c4d281c", MAC:"ee:e3:c2:9d:35:46", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.216567 containerd[1547]: 2025-01-29 11:13:20.213 [INFO][4606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qzp2w" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--qzp2w-eth0" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:19.777 [INFO][4585] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:19.872 [INFO][4585] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0 calico-kube-controllers-ff5cc644f- calico-system 9ff24cea-d161-45f7-b61c-4aeb38cb42ab 773 0 2025-01-29 11:13:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:ff5cc644f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-ff5cc644f-g54rc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibd93a3bd12a [] []}} ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" 
Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:19.878 [INFO][4585] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.127 [INFO][4695] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" HandleID="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Workload="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.149 [INFO][4695] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" HandleID="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Workload="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000349830), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-ff5cc644f-g54rc", "timestamp":"2025-01-29 11:13:20.127237975 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.149 [INFO][4695] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.176 [INFO][4695] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.176 [INFO][4695] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.178 [INFO][4695] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.182 [INFO][4695] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.185 [INFO][4695] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.187 [INFO][4695] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.193 [INFO][4695] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.193 [INFO][4695] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.194 [INFO][4695] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.198 [INFO][4695] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4695] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4695] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" host="localhost" Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4695] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.230425 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4695] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" HandleID="k8s-pod-network.1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Workload="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.230954 containerd[1547]: 2025-01-29 11:13:20.209 [INFO][4585] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0", GenerateName:"calico-kube-controllers-ff5cc644f-", Namespace:"calico-system", SelfLink:"", UID:"9ff24cea-d161-45f7-b61c-4aeb38cb42ab", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"ff5cc644f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-ff5cc644f-g54rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd93a3bd12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.230954 containerd[1547]: 2025-01-29 11:13:20.210 [INFO][4585] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.230954 containerd[1547]: 2025-01-29 11:13:20.210 [INFO][4585] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd93a3bd12a ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.230954 containerd[1547]: 2025-01-29 11:13:20.213 [INFO][4585] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 
11:13:20.230954 containerd[1547]: 2025-01-29 11:13:20.217 [INFO][4585] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0", GenerateName:"calico-kube-controllers-ff5cc644f-", Namespace:"calico-system", SelfLink:"", UID:"9ff24cea-d161-45f7-b61c-4aeb38cb42ab", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"ff5cc644f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe", Pod:"calico-kube-controllers-ff5cc644f-g54rc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd93a3bd12a", MAC:"c6:20:1b:ee:9e:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.230954 containerd[1547]: 
2025-01-29 11:13:20.226 [INFO][4585] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe" Namespace="calico-system" Pod="calico-kube-controllers-ff5cc644f-g54rc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--ff5cc644f--g54rc-eth0" Jan 29 11:13:20.253115 containerd[1547]: time="2025-01-29T11:13:20.252850148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.253115 containerd[1547]: time="2025-01-29T11:13:20.252896868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.253115 containerd[1547]: time="2025-01-29T11:13:20.252907668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.253115 containerd[1547]: time="2025-01-29T11:13:20.252975508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.253899 systemd-networkd[1231]: cali09be6db4924: Link UP Jan 29 11:13:20.255107 systemd-networkd[1231]: cali09be6db4924: Gained carrier Jan 29 11:13:20.256420 containerd[1547]: time="2025-01-29T11:13:20.253759110Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.256420 containerd[1547]: time="2025-01-29T11:13:20.253800870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.256420 containerd[1547]: time="2025-01-29T11:13:20.253815430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.256420 containerd[1547]: time="2025-01-29T11:13:20.253892190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:19.853 [INFO][4617] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:19.879 [INFO][4617] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0 calico-apiserver-c6f7d4488- calico-apiserver 932cd32e-4633-4030-9143-1db1b953d083 775 0 2025-01-29 11:13:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6f7d4488 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c6f7d4488-sk7nj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali09be6db4924 [] []}} ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:19.880 [INFO][4617] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.128 [INFO][4689] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" 
HandleID="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Workload="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.151 [INFO][4689] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" HandleID="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Workload="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000133780), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c6f7d4488-sk7nj", "timestamp":"2025-01-29 11:13:20.128668297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.151 [INFO][4689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.206 [INFO][4689] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.209 [INFO][4689] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.219 [INFO][4689] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.225 [INFO][4689] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.227 [INFO][4689] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.229 [INFO][4689] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.229 [INFO][4689] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.231 [INFO][4689] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.235 [INFO][4689] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4689] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4689] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" host="localhost" Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.270623 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4689] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" HandleID="k8s-pod-network.c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Workload="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.248 [INFO][4617] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0", GenerateName:"calico-apiserver-c6f7d4488-", Namespace:"calico-apiserver", SelfLink:"", UID:"932cd32e-4633-4030-9143-1db1b953d083", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6f7d4488", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c6f7d4488-sk7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09be6db4924", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.248 [INFO][4617] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.249 [INFO][4617] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09be6db4924 ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.256 [INFO][4617] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.256 [INFO][4617] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0", GenerateName:"calico-apiserver-c6f7d4488-", Namespace:"calico-apiserver", SelfLink:"", UID:"932cd32e-4633-4030-9143-1db1b953d083", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6f7d4488", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b", Pod:"calico-apiserver-c6f7d4488-sk7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali09be6db4924", MAC:"1a:27:a5:bd:e3:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.271356 containerd[1547]: 2025-01-29 11:13:20.267 [INFO][4617] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b" Namespace="calico-apiserver" 
Pod="calico-apiserver-c6f7d4488-sk7nj" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--sk7nj-eth0" Jan 29 11:13:20.285498 systemd-networkd[1231]: cali741747018b7: Link UP Jan 29 11:13:20.286246 systemd-networkd[1231]: cali741747018b7: Gained carrier Jan 29 11:13:20.294779 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.295867 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:19.780 [INFO][4596] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:19.871 [INFO][4596] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0 calico-apiserver-c6f7d4488- calico-apiserver 28a382ce-ce53-43b1-8797-59a017f4f410 774 0 2025-01-29 11:13:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c6f7d4488 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c6f7d4488-cn6bv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali741747018b7 [] []}} ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:19.872 [INFO][4596] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.130 [INFO][4669] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" HandleID="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Workload="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.154 [INFO][4669] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" HandleID="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Workload="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003c09b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c6f7d4488-cn6bv", "timestamp":"2025-01-29 11:13:20.129987939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.154 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.243 [INFO][4669] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.246 [INFO][4669] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.250 [INFO][4669] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.259 [INFO][4669] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.262 [INFO][4669] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.267 [INFO][4669] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.267 [INFO][4669] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.269 [INFO][4669] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418 Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.272 [INFO][4669] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.278 [INFO][4669] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.278 [INFO][4669] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" host="localhost" Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.278 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.301229 containerd[1547]: 2025-01-29 11:13:20.279 [INFO][4669] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" HandleID="k8s-pod-network.1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Workload="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.282 [INFO][4596] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0", GenerateName:"calico-apiserver-c6f7d4488-", Namespace:"calico-apiserver", SelfLink:"", UID:"28a382ce-ce53-43b1-8797-59a017f4f410", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6f7d4488", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c6f7d4488-cn6bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali741747018b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.282 [INFO][4596] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.282 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali741747018b7 ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.286 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.287 [INFO][4596] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0", GenerateName:"calico-apiserver-c6f7d4488-", Namespace:"calico-apiserver", SelfLink:"", UID:"28a382ce-ce53-43b1-8797-59a017f4f410", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c6f7d4488", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418", Pod:"calico-apiserver-c6f7d4488-cn6bv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali741747018b7", MAC:"1e:69:6d:09:8e:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.301954 containerd[1547]: 2025-01-29 11:13:20.297 [INFO][4596] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418" Namespace="calico-apiserver" 
Pod="calico-apiserver-c6f7d4488-cn6bv" WorkloadEndpoint="localhost-k8s-calico--apiserver--c6f7d4488--cn6bv-eth0" Jan 29 11:13:20.316313 containerd[1547]: time="2025-01-29T11:13:20.316171376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.316313 containerd[1547]: time="2025-01-29T11:13:20.316236776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.316313 containerd[1547]: time="2025-01-29T11:13:20.316261776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.316788 containerd[1547]: time="2025-01-29T11:13:20.316342096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.331380 containerd[1547]: time="2025-01-29T11:13:20.331324121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-ff5cc644f-g54rc,Uid:9ff24cea-d161-45f7-b61c-4aeb38cb42ab,Namespace:calico-system,Attempt:4,} returns sandbox id \"1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe\"" Jan 29 11:13:20.333293 containerd[1547]: time="2025-01-29T11:13:20.333116404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qzp2w,Uid:189a793a-2e8c-4371-8403-accce88972f2,Namespace:kube-system,Attempt:4,} returns sandbox id \"7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6\"" Jan 29 11:13:20.338137 systemd-networkd[1231]: cali6d70974e5bf: Link UP Jan 29 11:13:20.339222 systemd-networkd[1231]: cali6d70974e5bf: Gained carrier Jan 29 11:13:20.342667 containerd[1547]: time="2025-01-29T11:13:20.342194620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.342667 containerd[1547]: time="2025-01-29T11:13:20.342262940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.342667 containerd[1547]: time="2025-01-29T11:13:20.342279740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.342667 containerd[1547]: time="2025-01-29T11:13:20.342367460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.343264 kubelet[2768]: E0129 11:13:20.343128 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:20.346323 containerd[1547]: time="2025-01-29T11:13:20.346033906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:19.833 [INFO][4619] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:19.875 [INFO][4619] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--whhjq-eth0 csi-node-driver- calico-system e5725830-f2eb-461f-bca5-bd9a3c65abd6 656 0 2025-01-29 11:13:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-whhjq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6d70974e5bf [] []}} 
ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:19.876 [INFO][4619] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.129 [INFO][4679] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" HandleID="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Workload="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.155 [INFO][4679] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" HandleID="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Workload="localhost-k8s-csi--node--driver--whhjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000118580), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-whhjq", "timestamp":"2025-01-29 11:13:20.129567859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.156 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.279 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.279 [INFO][4679] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.283 [INFO][4679] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.291 [INFO][4679] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.299 [INFO][4679] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.302 [INFO][4679] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.305 [INFO][4679] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.305 [INFO][4679] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.309 [INFO][4679] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8 Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.319 [INFO][4679] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.328 [INFO][4679] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.328 [INFO][4679] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" host="localhost" Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.328 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.353308 containerd[1547]: 2025-01-29 11:13:20.328 [INFO][4679] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" HandleID="k8s-pod-network.68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Workload="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.331 [INFO][4619] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--whhjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5725830-f2eb-461f-bca5-bd9a3c65abd6", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-whhjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d70974e5bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.332 [INFO][4619] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.333 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d70974e5bf ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.335 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.336 [INFO][4619] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" 
Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--whhjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5725830-f2eb-461f-bca5-bd9a3c65abd6", ResourceVersion:"656", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8", Pod:"csi-node-driver-whhjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6d70974e5bf", MAC:"06:c4:98:27:98:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.354030 containerd[1547]: 2025-01-29 11:13:20.347 [INFO][4619] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8" Namespace="calico-system" Pod="csi-node-driver-whhjq" WorkloadEndpoint="localhost-k8s-csi--node--driver--whhjq-eth0" Jan 29 11:13:20.366319 containerd[1547]: 
time="2025-01-29T11:13:20.366269821Z" level=info msg="CreateContainer within sandbox \"7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:13:20.371395 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.383392 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.384398 containerd[1547]: time="2025-01-29T11:13:20.384120291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.384398 containerd[1547]: time="2025-01-29T11:13:20.384193051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.384398 containerd[1547]: time="2025-01-29T11:13:20.384204531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.384398 containerd[1547]: time="2025-01-29T11:13:20.384311011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.392668 containerd[1547]: time="2025-01-29T11:13:20.391451143Z" level=info msg="CreateContainer within sandbox \"7ab1c68379fa0f7b4d921ee9664ce180cebaa65aff9b73e30f0a6b4c46c0f4c6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6188361659afec83d81065b58131e50a64757f2ca48a79cf89e83dc16c5a6ac\"" Jan 29 11:13:20.392668 containerd[1547]: time="2025-01-29T11:13:20.392175225Z" level=info msg="StartContainer for \"d6188361659afec83d81065b58131e50a64757f2ca48a79cf89e83dc16c5a6ac\"" Jan 29 11:13:20.394385 systemd-networkd[1231]: cali5c41fa830c3: Link UP Jan 29 11:13:20.395150 systemd-networkd[1231]: cali5c41fa830c3: Gained carrier Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:19.833 [INFO][4630] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:19.872 [INFO][4630] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--w884t-eth0 coredns-7db6d8ff4d- kube-system a0701364-f7fc-4c5f-95e0-131b1446a6f8 772 0 2025-01-29 11:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-w884t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c41fa830c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:19.873 [INFO][4630] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.133 [INFO][4670] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" HandleID="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Workload="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.157 [INFO][4670] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" HandleID="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Workload="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040d360), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-w884t", "timestamp":"2025-01-29 11:13:20.130962821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.157 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.329 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.329 [INFO][4670] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.332 [INFO][4670] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.345 [INFO][4670] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.353 [INFO][4670] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.356 [INFO][4670] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.359 [INFO][4670] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.360 [INFO][4670] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.363 [INFO][4670] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.370 [INFO][4670] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.380 [INFO][4670] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.380 [INFO][4670] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" host="localhost" Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.380 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:13:20.410676 containerd[1547]: 2025-01-29 11:13:20.380 [INFO][4670] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" HandleID="k8s-pod-network.ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Workload="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.386 [INFO][4630] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w884t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a0701364-f7fc-4c5f-95e0-131b1446a6f8", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-w884t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c41fa830c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.386 [INFO][4630] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.389 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c41fa830c3 ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.395 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 
11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.395 [INFO][4630] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--w884t-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a0701364-f7fc-4c5f-95e0-131b1446a6f8", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b", Pod:"coredns-7db6d8ff4d-w884t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c41fa830c3", MAC:"2a:a9:3b:8c:29:d5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:13:20.411192 containerd[1547]: 2025-01-29 11:13:20.405 [INFO][4630] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-w884t" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--w884t-eth0" Jan 29 11:13:20.418824 containerd[1547]: time="2025-01-29T11:13:20.418787110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-sk7nj,Uid:932cd32e-4633-4030-9143-1db1b953d083,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b\"" Jan 29 11:13:20.425302 containerd[1547]: time="2025-01-29T11:13:20.425191281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c6f7d4488-cn6bv,Uid:28a382ce-ce53-43b1-8797-59a017f4f410,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418\"" Jan 29 11:13:20.440724 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.453322 containerd[1547]: time="2025-01-29T11:13:20.453039568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:13:20.453322 containerd[1547]: time="2025-01-29T11:13:20.453124928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:13:20.453322 containerd[1547]: time="2025-01-29T11:13:20.453142008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.453322 containerd[1547]: time="2025-01-29T11:13:20.453243448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:13:20.459522 containerd[1547]: time="2025-01-29T11:13:20.459469219Z" level=info msg="StartContainer for \"d6188361659afec83d81065b58131e50a64757f2ca48a79cf89e83dc16c5a6ac\" returns successfully" Jan 29 11:13:20.481397 containerd[1547]: time="2025-01-29T11:13:20.481299216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-whhjq,Uid:e5725830-f2eb-461f-bca5-bd9a3c65abd6,Namespace:calico-system,Attempt:4,} returns sandbox id \"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8\"" Jan 29 11:13:20.488146 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:13:20.512162 systemd[1]: run-netns-cni\x2de049f9c1\x2d61f1\x2d2460\x2d4106\x2dca3d00bd9916.mount: Deactivated successfully. Jan 29 11:13:20.512302 systemd[1]: run-netns-cni\x2dd8e4ecfe\x2d4bbe\x2d04ec\x2d5a45\x2d66a4c5bda6e9.mount: Deactivated successfully. 
Jan 29 11:13:20.519267 containerd[1547]: time="2025-01-29T11:13:20.519118000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-w884t,Uid:a0701364-f7fc-4c5f-95e0-131b1446a6f8,Namespace:kube-system,Attempt:4,} returns sandbox id \"ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b\"" Jan 29 11:13:20.520401 kubelet[2768]: E0129 11:13:20.519975 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:20.523287 containerd[1547]: time="2025-01-29T11:13:20.523216927Z" level=info msg="CreateContainer within sandbox \"ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:13:20.540005 containerd[1547]: time="2025-01-29T11:13:20.539268594Z" level=info msg="CreateContainer within sandbox \"ef28f4fbcda1ad0af2be2511bcb0a1bb124dc24cab84385a9ddb7da83befd70b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5acb72c88340e4a51f8d0c4500731c103a383563991f261bf7a2f2682ac2257e\"" Jan 29 11:13:20.540775 containerd[1547]: time="2025-01-29T11:13:20.540741837Z" level=info msg="StartContainer for \"5acb72c88340e4a51f8d0c4500731c103a383563991f261bf7a2f2682ac2257e\"" Jan 29 11:13:20.603387 containerd[1547]: time="2025-01-29T11:13:20.602140941Z" level=info msg="StartContainer for \"5acb72c88340e4a51f8d0c4500731c103a383563991f261bf7a2f2682ac2257e\" returns successfully" Jan 29 11:13:20.742095 kubelet[2768]: E0129 11:13:20.741054 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:20.759098 kubelet[2768]: E0129 11:13:20.759059 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 29 11:13:20.763204 kubelet[2768]: I0129 11:13:20.763184 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:20.764149 kubelet[2768]: E0129 11:13:20.764083 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:20.800284 kubelet[2768]: I0129 11:13:20.799811 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qzp2w" podStartSLOduration=19.799794317 podStartE2EDuration="19.799794317s" podCreationTimestamp="2025-01-29 11:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:20.788616138 +0000 UTC m=+36.365997193" watchObservedRunningTime="2025-01-29 11:13:20.799794317 +0000 UTC m=+36.377175292" Jan 29 11:13:20.818012 kubelet[2768]: I0129 11:13:20.817901 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-w884t" podStartSLOduration=19.817883267 podStartE2EDuration="19.817883267s" podCreationTimestamp="2025-01-29 11:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:13:20.80143468 +0000 UTC m=+36.378815695" watchObservedRunningTime="2025-01-29 11:13:20.817883267 +0000 UTC m=+36.395264282" Jan 29 11:13:21.495752 containerd[1547]: time="2025-01-29T11:13:21.495689034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 11:13:21.499720 containerd[1547]: time="2025-01-29T11:13:21.499683400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.153226533s" Jan 29 11:13:21.499720 containerd[1547]: time="2025-01-29T11:13:21.499721480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 11:13:21.501725 containerd[1547]: time="2025-01-29T11:13:21.501694524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:13:21.503604 containerd[1547]: time="2025-01-29T11:13:21.503557367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:21.506432 containerd[1547]: time="2025-01-29T11:13:21.504335488Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:21.506432 containerd[1547]: time="2025-01-29T11:13:21.504914329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:21.511022 containerd[1547]: time="2025-01-29T11:13:21.510970859Z" level=info msg="CreateContainer within sandbox \"1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:13:21.523521 containerd[1547]: time="2025-01-29T11:13:21.523477000Z" level=info msg="CreateContainer within sandbox \"1cb8fe61a8bf837bd6ccfdc6a072cc9c9190b16e10037c1fcf95e067dd40e0fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"a67a6eb4fba7f9e29f0cc942c05a67ef478f5b1fbc3a55ccf7bb326e803e8132\"" Jan 29 11:13:21.524174 containerd[1547]: time="2025-01-29T11:13:21.524134921Z" level=info msg="StartContainer for \"a67a6eb4fba7f9e29f0cc942c05a67ef478f5b1fbc3a55ccf7bb326e803e8132\"" Jan 29 11:13:21.537682 systemd-networkd[1231]: cali5c41fa830c3: Gained IPv6LL Jan 29 11:13:21.583877 containerd[1547]: time="2025-01-29T11:13:21.583821459Z" level=info msg="StartContainer for \"a67a6eb4fba7f9e29f0cc942c05a67ef478f5b1fbc3a55ccf7bb326e803e8132\" returns successfully" Jan 29 11:13:21.602530 systemd-networkd[1231]: calibd93a3bd12a: Gained IPv6LL Jan 29 11:13:21.767992 kubelet[2768]: E0129 11:13:21.767863 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.769173 kubelet[2768]: E0129 11:13:21.768913 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:21.780983 kubelet[2768]: I0129 11:13:21.780761 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-ff5cc644f-g54rc" podStartSLOduration=11.614764709 podStartE2EDuration="12.780745544s" podCreationTimestamp="2025-01-29 11:13:09 +0000 UTC" firstStartedPulling="2025-01-29 11:13:20.334395646 +0000 UTC m=+35.911776661" lastFinishedPulling="2025-01-29 11:13:21.500376521 +0000 UTC m=+37.077757496" observedRunningTime="2025-01-29 11:13:21.780024702 +0000 UTC m=+37.357405717" watchObservedRunningTime="2025-01-29 11:13:21.780745544 +0000 UTC m=+37.358126559" Jan 29 11:13:21.921547 systemd-networkd[1231]: cali741747018b7: Gained IPv6LL Jan 29 11:13:21.985526 systemd-networkd[1231]: calib4e3c4d281c: Gained IPv6LL Jan 29 11:13:22.113552 systemd-networkd[1231]: cali09be6db4924: Gained IPv6LL Jan 29 11:13:22.245590 
systemd-networkd[1231]: cali6d70974e5bf: Gained IPv6LL Jan 29 11:13:22.686641 containerd[1547]: time="2025-01-29T11:13:22.686598684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:22.687547 containerd[1547]: time="2025-01-29T11:13:22.687393126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 11:13:22.688232 containerd[1547]: time="2025-01-29T11:13:22.688193967Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:22.690295 containerd[1547]: time="2025-01-29T11:13:22.690242530Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:22.691437 containerd[1547]: time="2025-01-29T11:13:22.691005571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.189269887s" Jan 29 11:13:22.691437 containerd[1547]: time="2025-01-29T11:13:22.691040411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:13:22.691870 containerd[1547]: time="2025-01-29T11:13:22.691841933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:13:22.692967 containerd[1547]: time="2025-01-29T11:13:22.692877374Z" level=info msg="CreateContainer within sandbox 
\"c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:13:22.704443 containerd[1547]: time="2025-01-29T11:13:22.704392433Z" level=info msg="CreateContainer within sandbox \"c4081c3af7d6a33efcfe8309d7287ad596a72ff6c67f8bc77c24d51389159a2b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0cb0c8470aa95884b8cb854739bce0374bb249393a1d4c3399b045a31b8dbc5f\"" Jan 29 11:13:22.704796 containerd[1547]: time="2025-01-29T11:13:22.704771873Z" level=info msg="StartContainer for \"0cb0c8470aa95884b8cb854739bce0374bb249393a1d4c3399b045a31b8dbc5f\"" Jan 29 11:13:22.732495 kubelet[2768]: I0129 11:13:22.732325 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:22.733265 kubelet[2768]: E0129 11:13:22.733115 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:22.774990 kubelet[2768]: E0129 11:13:22.773780 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:22.774990 kubelet[2768]: E0129 11:13:22.774517 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:22.789799 containerd[1547]: time="2025-01-29T11:13:22.789752050Z" level=info msg="StartContainer for \"0cb0c8470aa95884b8cb854739bce0374bb249393a1d4c3399b045a31b8dbc5f\" returns successfully" Jan 29 11:13:22.987982 containerd[1547]: time="2025-01-29T11:13:22.987871567Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:22.988634 containerd[1547]: 
time="2025-01-29T11:13:22.988579208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:13:22.992187 containerd[1547]: time="2025-01-29T11:13:22.991745693Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 299.87196ms" Jan 29 11:13:22.992187 containerd[1547]: time="2025-01-29T11:13:22.991779133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 11:13:22.994103 containerd[1547]: time="2025-01-29T11:13:22.993678976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:13:22.994505 containerd[1547]: time="2025-01-29T11:13:22.994473137Z" level=info msg="CreateContainer within sandbox \"1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:13:23.006744 containerd[1547]: time="2025-01-29T11:13:23.006702157Z" level=info msg="CreateContainer within sandbox \"1e2ba88ab22ca96eeece8e1bc093a376c1154ebcadb4c3b5fb63632cb07f1418\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"70cfbfff8f09ad01e95c9a3d226e8ffadd3d6da8ee3f1bae5272496f25dc9d77\"" Jan 29 11:13:23.007795 containerd[1547]: time="2025-01-29T11:13:23.007770758Z" level=info msg="StartContainer for \"70cfbfff8f09ad01e95c9a3d226e8ffadd3d6da8ee3f1bae5272496f25dc9d77\"" Jan 29 11:13:23.060394 containerd[1547]: time="2025-01-29T11:13:23.060332520Z" level=info msg="StartContainer for \"70cfbfff8f09ad01e95c9a3d226e8ffadd3d6da8ee3f1bae5272496f25dc9d77\" returns successfully" Jan 
29 11:13:23.282607 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:49046.service - OpenSSH per-connection server daemon (10.0.0.1:49046). Jan 29 11:13:23.332846 sshd[5445]: Accepted publickey for core from 10.0.0.1 port 49046 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:23.334190 sshd-session[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:23.339488 systemd-logind[1528]: New session 10 of user core. Jan 29 11:13:23.344646 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:13:23.519617 sshd[5448]: Connection closed by 10.0.0.1 port 49046 Jan 29 11:13:23.520631 sshd-session[5445]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:23.524879 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:49056.service - OpenSSH per-connection server daemon (10.0.0.1:49056). Jan 29 11:13:23.527546 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:49046.service: Deactivated successfully. Jan 29 11:13:23.536600 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:13:23.540570 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:13:23.541778 systemd-logind[1528]: Removed session 10. Jan 29 11:13:23.580664 sshd[5459]: Accepted publickey for core from 10.0.0.1 port 49056 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:23.581755 sshd-session[5459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:23.585213 systemd-logind[1528]: New session 11 of user core. Jan 29 11:13:23.590674 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:13:23.807959 kubelet[2768]: I0129 11:13:23.806106 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c6f7d4488-sk7nj" podStartSLOduration=14.531191096 podStartE2EDuration="16.802852837s" podCreationTimestamp="2025-01-29 11:13:07 +0000 UTC" firstStartedPulling="2025-01-29 11:13:20.420019392 +0000 UTC m=+35.997400407" lastFinishedPulling="2025-01-29 11:13:22.691681133 +0000 UTC m=+38.269062148" observedRunningTime="2025-01-29 11:13:23.801726715 +0000 UTC m=+39.379107730" watchObservedRunningTime="2025-01-29 11:13:23.802852837 +0000 UTC m=+39.380233852" Jan 29 11:13:23.826336 sshd[5465]: Connection closed by 10.0.0.1 port 49056 Jan 29 11:13:23.825003 sshd-session[5459]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:23.839703 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:49062.service - OpenSSH per-connection server daemon (10.0.0.1:49062). Jan 29 11:13:23.840140 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:49056.service: Deactivated successfully. Jan 29 11:13:23.841614 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:13:23.846490 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:13:23.854473 systemd-logind[1528]: Removed session 11. Jan 29 11:13:23.898748 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 49062 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:23.901183 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:23.918139 systemd-logind[1528]: New session 12 of user core. Jan 29 11:13:23.926476 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 29 11:13:24.073875 containerd[1547]: time="2025-01-29T11:13:24.073454215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:24.076869 containerd[1547]: time="2025-01-29T11:13:24.075822219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 11:13:24.079155 containerd[1547]: time="2025-01-29T11:13:24.078350502Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:24.083797 containerd[1547]: time="2025-01-29T11:13:24.083758231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:24.085079 containerd[1547]: time="2025-01-29T11:13:24.084953552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.091242056s" Jan 29 11:13:24.085079 containerd[1547]: time="2025-01-29T11:13:24.084993392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 11:13:24.089565 containerd[1547]: time="2025-01-29T11:13:24.088922838Z" level=info msg="CreateContainer within sandbox \"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:13:24.126861 containerd[1547]: time="2025-01-29T11:13:24.124096212Z" level=info msg="CreateContainer within 
sandbox \"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2e31f54f48e70bcc031592548bf006315fa51dd355d5169ea366fa192c30a883\"" Jan 29 11:13:24.127084 containerd[1547]: time="2025-01-29T11:13:24.127055376Z" level=info msg="StartContainer for \"2e31f54f48e70bcc031592548bf006315fa51dd355d5169ea366fa192c30a883\"" Jan 29 11:13:24.236956 containerd[1547]: time="2025-01-29T11:13:24.236916383Z" level=info msg="StartContainer for \"2e31f54f48e70bcc031592548bf006315fa51dd355d5169ea366fa192c30a883\" returns successfully" Jan 29 11:13:24.240376 containerd[1547]: time="2025-01-29T11:13:24.240332948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:13:24.299102 sshd[5488]: Connection closed by 10.0.0.1 port 49062 Jan 29 11:13:24.299582 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:24.303558 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:49062.service: Deactivated successfully. Jan 29 11:13:24.306973 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:13:24.307052 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:13:24.310448 systemd-logind[1528]: Removed session 12. 
Jan 29 11:13:24.810747 kubelet[2768]: I0129 11:13:24.810720 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:24.830863 kubelet[2768]: I0129 11:13:24.830615 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c6f7d4488-cn6bv" podStartSLOduration=14.265326233 podStartE2EDuration="16.830598843s" podCreationTimestamp="2025-01-29 11:13:08 +0000 UTC" firstStartedPulling="2025-01-29 11:13:20.427849245 +0000 UTC m=+36.005230220" lastFinishedPulling="2025-01-29 11:13:22.993121815 +0000 UTC m=+38.570502830" observedRunningTime="2025-01-29 11:13:23.819964463 +0000 UTC m=+39.397345478" watchObservedRunningTime="2025-01-29 11:13:24.830598843 +0000 UTC m=+40.407979898" Jan 29 11:13:25.136268 containerd[1547]: time="2025-01-29T11:13:25.136203901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:25.137901 containerd[1547]: time="2025-01-29T11:13:25.137860823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 11:13:25.139440 containerd[1547]: time="2025-01-29T11:13:25.138974225Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:25.141023 containerd[1547]: time="2025-01-29T11:13:25.140971628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:13:25.141783 containerd[1547]: time="2025-01-29T11:13:25.141666149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 901.288561ms" Jan 29 11:13:25.141783 containerd[1547]: time="2025-01-29T11:13:25.141694349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 11:13:25.144111 containerd[1547]: time="2025-01-29T11:13:25.144081153Z" level=info msg="CreateContainer within sandbox \"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:13:25.153534 containerd[1547]: time="2025-01-29T11:13:25.153444246Z" level=info msg="CreateContainer within sandbox \"68b2e151e4c50cf1d2df044cfb47cf959e5b20223cefef2869c9819087d950f8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9f8a447652bd681d17bf8a95bc98bb86ce4b32a603ba5a3fad8bd5f6993123b3\"" Jan 29 11:13:25.156122 containerd[1547]: time="2025-01-29T11:13:25.156093690Z" level=info msg="StartContainer for \"9f8a447652bd681d17bf8a95bc98bb86ce4b32a603ba5a3fad8bd5f6993123b3\"" Jan 29 11:13:25.219392 containerd[1547]: time="2025-01-29T11:13:25.219350104Z" level=info msg="StartContainer for \"9f8a447652bd681d17bf8a95bc98bb86ce4b32a603ba5a3fad8bd5f6993123b3\" returns successfully" Jan 29 11:13:25.591777 kubelet[2768]: I0129 11:13:25.591711 2768 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:13:25.593540 kubelet[2768]: I0129 11:13:25.593514 2768 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:13:25.828819 kubelet[2768]: I0129 11:13:25.828750 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-whhjq" podStartSLOduration=12.168658231 podStartE2EDuration="16.828732324s" podCreationTimestamp="2025-01-29 11:13:09 +0000 UTC" firstStartedPulling="2025-01-29 11:13:20.482673138 +0000 UTC m=+36.060054153" lastFinishedPulling="2025-01-29 11:13:25.142747231 +0000 UTC m=+40.720128246" observedRunningTime="2025-01-29 11:13:25.827675643 +0000 UTC m=+41.405056698" watchObservedRunningTime="2025-01-29 11:13:25.828732324 +0000 UTC m=+41.406113339" Jan 29 11:13:29.310624 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:49074.service - OpenSSH per-connection server daemon (10.0.0.1:49074). Jan 29 11:13:29.357200 sshd[5712]: Accepted publickey for core from 10.0.0.1 port 49074 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:29.358575 sshd-session[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:29.362267 systemd-logind[1528]: New session 13 of user core. Jan 29 11:13:29.373688 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:13:29.552503 sshd[5719]: Connection closed by 10.0.0.1 port 49074 Jan 29 11:13:29.552844 sshd-session[5712]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:29.563843 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:49084.service - OpenSSH per-connection server daemon (10.0.0.1:49084). Jan 29 11:13:29.564229 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:49074.service: Deactivated successfully. Jan 29 11:13:29.566662 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:13:29.568770 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:13:29.570543 systemd-logind[1528]: Removed session 13. 
Jan 29 11:13:29.599738 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 49084 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:29.600808 sshd-session[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:29.604548 systemd-logind[1528]: New session 14 of user core. Jan 29 11:13:29.614668 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:13:29.869574 sshd[5738]: Connection closed by 10.0.0.1 port 49084 Jan 29 11:13:29.870101 sshd-session[5732]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:29.881634 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100). Jan 29 11:13:29.882417 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:49084.service: Deactivated successfully. Jan 29 11:13:29.885610 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:13:29.885908 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:13:29.890686 systemd-logind[1528]: Removed session 14. Jan 29 11:13:29.931193 sshd[5747]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:29.931786 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:29.936666 systemd-logind[1528]: New session 15 of user core. Jan 29 11:13:29.947698 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 11:13:30.981326 kubelet[2768]: I0129 11:13:30.981200 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:30.982880 kubelet[2768]: E0129 11:13:30.982601 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:31.506289 sshd[5754]: Connection closed by 10.0.0.1 port 49100 Jan 29 11:13:31.507185 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:31.522885 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:49106.service - OpenSSH per-connection server daemon (10.0.0.1:49106). Jan 29 11:13:31.523864 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:49100.service: Deactivated successfully. Jan 29 11:13:31.531060 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:13:31.535885 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:13:31.542732 systemd-logind[1528]: Removed session 15. Jan 29 11:13:31.567511 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 49106 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:31.568858 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:31.574756 systemd-logind[1528]: New session 16 of user core. Jan 29 11:13:31.582831 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 11:13:31.837094 kubelet[2768]: E0129 11:13:31.837050 2768 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:13:31.901461 sshd[5843]: Connection closed by 10.0.0.1 port 49106 Jan 29 11:13:31.903337 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:31.913749 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114). Jan 29 11:13:31.914628 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:49106.service: Deactivated successfully. Jan 29 11:13:31.916565 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:13:31.918875 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:13:31.919798 systemd-logind[1528]: Removed session 16. Jan 29 11:13:31.961765 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:31.963086 sshd-session[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:31.967981 systemd-logind[1528]: New session 17 of user core. Jan 29 11:13:31.971840 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:13:32.097175 kernel: bpftool[5886]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:13:32.132842 sshd[5860]: Connection closed by 10.0.0.1 port 49114 Jan 29 11:13:32.133171 sshd-session[5852]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:32.136913 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:49114.service: Deactivated successfully. Jan 29 11:13:32.139518 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:13:32.139703 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:13:32.140591 systemd-logind[1528]: Removed session 17. 
Jan 29 11:13:32.265778 systemd-networkd[1231]: vxlan.calico: Link UP Jan 29 11:13:32.265787 systemd-networkd[1231]: vxlan.calico: Gained carrier Jan 29 11:13:33.889531 systemd-networkd[1231]: vxlan.calico: Gained IPv6LL Jan 29 11:13:37.152661 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:44668.service - OpenSSH per-connection server daemon (10.0.0.1:44668). Jan 29 11:13:37.196841 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 44668 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:37.197975 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:37.201610 systemd-logind[1528]: New session 18 of user core. Jan 29 11:13:37.209763 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:13:37.360164 sshd[5977]: Connection closed by 10.0.0.1 port 44668 Jan 29 11:13:37.361612 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:37.364210 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:44668.service: Deactivated successfully. Jan 29 11:13:37.367004 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:13:37.367355 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:13:37.368294 systemd-logind[1528]: Removed session 18. Jan 29 11:13:37.443877 kubelet[2768]: I0129 11:13:37.443756 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:13:42.372635 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:44678.service - OpenSSH per-connection server daemon (10.0.0.1:44678). Jan 29 11:13:42.413431 sshd[6002]: Accepted publickey for core from 10.0.0.1 port 44678 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:42.413992 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:42.417397 systemd-logind[1528]: New session 19 of user core. 
Jan 29 11:13:42.428311 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:13:42.577902 sshd[6005]: Connection closed by 10.0.0.1 port 44678 Jan 29 11:13:42.578272 sshd-session[6002]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:42.580907 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:44678.service: Deactivated successfully. Jan 29 11:13:42.584558 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:13:42.584800 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:13:42.586157 systemd-logind[1528]: Removed session 19. Jan 29 11:13:44.501076 containerd[1547]: time="2025-01-29T11:13:44.500981366Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:44.501520 containerd[1547]: time="2025-01-29T11:13:44.501087206Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:44.501520 containerd[1547]: time="2025-01-29T11:13:44.501098686Z" level=info msg="StopPodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:44.501700 containerd[1547]: time="2025-01-29T11:13:44.501657927Z" level=info msg="RemovePodSandbox for \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:44.502015 containerd[1547]: time="2025-01-29T11:13:44.501977847Z" level=info msg="Forcibly stopping sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\"" Jan 29 11:13:44.502092 containerd[1547]: time="2025-01-29T11:13:44.502075167Z" level=info msg="TearDown network for sandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" successfully" Jan 29 11:13:44.507864 containerd[1547]: time="2025-01-29T11:13:44.507821973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:13:44.508852 containerd[1547]: time="2025-01-29T11:13:44.508707734Z" level=info msg="RemovePodSandbox \"24cff775b4c3e9b32d8979d946d103e4965692f8367c31164dce9b066f783b86\" returns successfully" Jan 29 11:13:44.509315 containerd[1547]: time="2025-01-29T11:13:44.509197615Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:44.509391 containerd[1547]: time="2025-01-29T11:13:44.509374575Z" level=info msg="TearDown network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" successfully" Jan 29 11:13:44.509435 containerd[1547]: time="2025-01-29T11:13:44.509390335Z" level=info msg="StopPodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" returns successfully" Jan 29 11:13:44.510694 containerd[1547]: time="2025-01-29T11:13:44.509656655Z" level=info msg="RemovePodSandbox for \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:44.510694 containerd[1547]: time="2025-01-29T11:13:44.509683535Z" level=info msg="Forcibly stopping sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\"" Jan 29 11:13:44.510694 containerd[1547]: time="2025-01-29T11:13:44.509744375Z" level=info msg="TearDown network for sandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" successfully" Jan 29 11:13:44.519387 containerd[1547]: time="2025-01-29T11:13:44.519359026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.519526 containerd[1547]: time="2025-01-29T11:13:44.519508146Z" level=info msg="RemovePodSandbox \"468747847107e193b4993eace04ca7468b3fd2d3065d71d67c254f92d3be3703\" returns successfully" Jan 29 11:13:44.519895 containerd[1547]: time="2025-01-29T11:13:44.519873706Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" Jan 29 11:13:44.520093 containerd[1547]: time="2025-01-29T11:13:44.520075746Z" level=info msg="TearDown network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" successfully" Jan 29 11:13:44.520157 containerd[1547]: time="2025-01-29T11:13:44.520144067Z" level=info msg="StopPodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" returns successfully" Jan 29 11:13:44.520510 containerd[1547]: time="2025-01-29T11:13:44.520485267Z" level=info msg="RemovePodSandbox for \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" Jan 29 11:13:44.520577 containerd[1547]: time="2025-01-29T11:13:44.520511307Z" level=info msg="Forcibly stopping sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\"" Jan 29 11:13:44.520577 containerd[1547]: time="2025-01-29T11:13:44.520572787Z" level=info msg="TearDown network for sandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" successfully" Jan 29 11:13:44.522926 containerd[1547]: time="2025-01-29T11:13:44.522895029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.522981 containerd[1547]: time="2025-01-29T11:13:44.522949070Z" level=info msg="RemovePodSandbox \"3053ee4799567fadd8f7ad9599c71887c19476a04395e04ce1622cea446c4666\" returns successfully" Jan 29 11:13:44.523459 containerd[1547]: time="2025-01-29T11:13:44.523276390Z" level=info msg="StopPodSandbox for \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\"" Jan 29 11:13:44.523459 containerd[1547]: time="2025-01-29T11:13:44.523374870Z" level=info msg="TearDown network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" successfully" Jan 29 11:13:44.523459 containerd[1547]: time="2025-01-29T11:13:44.523386030Z" level=info msg="StopPodSandbox for \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" returns successfully" Jan 29 11:13:44.523663 containerd[1547]: time="2025-01-29T11:13:44.523620790Z" level=info msg="RemovePodSandbox for \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\"" Jan 29 11:13:44.523663 containerd[1547]: time="2025-01-29T11:13:44.523651550Z" level=info msg="Forcibly stopping sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\"" Jan 29 11:13:44.523735 containerd[1547]: time="2025-01-29T11:13:44.523720790Z" level=info msg="TearDown network for sandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" successfully" Jan 29 11:13:44.526200 containerd[1547]: time="2025-01-29T11:13:44.526163713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.526259 containerd[1547]: time="2025-01-29T11:13:44.526243833Z" level=info msg="RemovePodSandbox \"aefe0cc9f9ff518257c51187b83d693a9b0207369935c6395464b03603d0cd52\" returns successfully" Jan 29 11:13:44.526587 containerd[1547]: time="2025-01-29T11:13:44.526560433Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:44.526653 containerd[1547]: time="2025-01-29T11:13:44.526639273Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:44.526679 containerd[1547]: time="2025-01-29T11:13:44.526652633Z" level=info msg="StopPodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:44.526903 containerd[1547]: time="2025-01-29T11:13:44.526883194Z" level=info msg="RemovePodSandbox for \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:44.526950 containerd[1547]: time="2025-01-29T11:13:44.526907514Z" level=info msg="Forcibly stopping sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\"" Jan 29 11:13:44.526976 containerd[1547]: time="2025-01-29T11:13:44.526963314Z" level=info msg="TearDown network for sandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" successfully" Jan 29 11:13:44.529344 containerd[1547]: time="2025-01-29T11:13:44.529308116Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.529420 containerd[1547]: time="2025-01-29T11:13:44.529361956Z" level=info msg="RemovePodSandbox \"124cd1563c8a762aa93bb6c1261538d84a8b6eaebc585abf67847b3ff463a8ce\" returns successfully" Jan 29 11:13:44.529652 containerd[1547]: time="2025-01-29T11:13:44.529630597Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:44.529860 containerd[1547]: time="2025-01-29T11:13:44.529790917Z" level=info msg="TearDown network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" successfully" Jan 29 11:13:44.529860 containerd[1547]: time="2025-01-29T11:13:44.529805757Z" level=info msg="StopPodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" returns successfully" Jan 29 11:13:44.530058 containerd[1547]: time="2025-01-29T11:13:44.530035317Z" level=info msg="RemovePodSandbox for \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:44.530097 containerd[1547]: time="2025-01-29T11:13:44.530065557Z" level=info msg="Forcibly stopping sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\"" Jan 29 11:13:44.530141 containerd[1547]: time="2025-01-29T11:13:44.530127917Z" level=info msg="TearDown network for sandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" successfully" Jan 29 11:13:44.532439 containerd[1547]: time="2025-01-29T11:13:44.532391080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.532504 containerd[1547]: time="2025-01-29T11:13:44.532462120Z" level=info msg="RemovePodSandbox \"a95f1746ba14e82803b593a022abc34732f8bd96e656aa4473670302e87bd2ca\" returns successfully" Jan 29 11:13:44.532962 containerd[1547]: time="2025-01-29T11:13:44.532823360Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" Jan 29 11:13:44.532962 containerd[1547]: time="2025-01-29T11:13:44.532899240Z" level=info msg="TearDown network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" successfully" Jan 29 11:13:44.532962 containerd[1547]: time="2025-01-29T11:13:44.532909640Z" level=info msg="StopPodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" returns successfully" Jan 29 11:13:44.533194 containerd[1547]: time="2025-01-29T11:13:44.533164640Z" level=info msg="RemovePodSandbox for \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" Jan 29 11:13:44.534364 containerd[1547]: time="2025-01-29T11:13:44.533250641Z" level=info msg="Forcibly stopping sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\"" Jan 29 11:13:44.534364 containerd[1547]: time="2025-01-29T11:13:44.533332121Z" level=info msg="TearDown network for sandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" successfully" Jan 29 11:13:44.535786 containerd[1547]: time="2025-01-29T11:13:44.535757643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.535939 containerd[1547]: time="2025-01-29T11:13:44.535920763Z" level=info msg="RemovePodSandbox \"c3b61c29349d58578e11d1d851159df1422f4e34e1db28bcbc09b073677a87e1\" returns successfully" Jan 29 11:13:44.536291 containerd[1547]: time="2025-01-29T11:13:44.536264924Z" level=info msg="StopPodSandbox for \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\"" Jan 29 11:13:44.536379 containerd[1547]: time="2025-01-29T11:13:44.536362484Z" level=info msg="TearDown network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" successfully" Jan 29 11:13:44.536379 containerd[1547]: time="2025-01-29T11:13:44.536377684Z" level=info msg="StopPodSandbox for \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" returns successfully" Jan 29 11:13:44.537737 containerd[1547]: time="2025-01-29T11:13:44.536646364Z" level=info msg="RemovePodSandbox for \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\"" Jan 29 11:13:44.537737 containerd[1547]: time="2025-01-29T11:13:44.536673324Z" level=info msg="Forcibly stopping sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\"" Jan 29 11:13:44.537737 containerd[1547]: time="2025-01-29T11:13:44.536730324Z" level=info msg="TearDown network for sandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" successfully" Jan 29 11:13:44.538825 containerd[1547]: time="2025-01-29T11:13:44.538796006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.538953 containerd[1547]: time="2025-01-29T11:13:44.538935247Z" level=info msg="RemovePodSandbox \"55d80ef1ae3d96721b3c6934a00c0294dbf2190b2a140863990e6b09c65f12eb\" returns successfully" Jan 29 11:13:44.539548 containerd[1547]: time="2025-01-29T11:13:44.539524607Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:44.539818 containerd[1547]: time="2025-01-29T11:13:44.539798888Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:44.539892 containerd[1547]: time="2025-01-29T11:13:44.539878888Z" level=info msg="StopPodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:44.540345 containerd[1547]: time="2025-01-29T11:13:44.540324608Z" level=info msg="RemovePodSandbox for \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:44.540465 containerd[1547]: time="2025-01-29T11:13:44.540447528Z" level=info msg="Forcibly stopping sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\"" Jan 29 11:13:44.540568 containerd[1547]: time="2025-01-29T11:13:44.540552408Z" level=info msg="TearDown network for sandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" successfully" Jan 29 11:13:44.546776 containerd[1547]: time="2025-01-29T11:13:44.546740495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.546934 containerd[1547]: time="2025-01-29T11:13:44.546914135Z" level=info msg="RemovePodSandbox \"266aa1763d9e35aac11bfb0db08f8e0014f43794f54dd15570c416aa97815c83\" returns successfully" Jan 29 11:13:44.547343 containerd[1547]: time="2025-01-29T11:13:44.547317296Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 11:13:44.547550 containerd[1547]: time="2025-01-29T11:13:44.547533896Z" level=info msg="TearDown network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" successfully" Jan 29 11:13:44.547618 containerd[1547]: time="2025-01-29T11:13:44.547605536Z" level=info msg="StopPodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" returns successfully" Jan 29 11:13:44.548015 containerd[1547]: time="2025-01-29T11:13:44.547962496Z" level=info msg="RemovePodSandbox for \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 11:13:44.548072 containerd[1547]: time="2025-01-29T11:13:44.548018576Z" level=info msg="Forcibly stopping sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\"" Jan 29 11:13:44.548094 containerd[1547]: time="2025-01-29T11:13:44.548080656Z" level=info msg="TearDown network for sandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" successfully" Jan 29 11:13:44.574424 containerd[1547]: time="2025-01-29T11:13:44.574367324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.574612 containerd[1547]: time="2025-01-29T11:13:44.574446365Z" level=info msg="RemovePodSandbox \"11130a8786703d7fec2ee64604f4dfd319971061573c2fea01baa608a9315aea\" returns successfully" Jan 29 11:13:44.575012 containerd[1547]: time="2025-01-29T11:13:44.574858005Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" Jan 29 11:13:44.575012 containerd[1547]: time="2025-01-29T11:13:44.574946685Z" level=info msg="TearDown network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" successfully" Jan 29 11:13:44.575012 containerd[1547]: time="2025-01-29T11:13:44.574957285Z" level=info msg="StopPodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" returns successfully" Jan 29 11:13:44.576021 containerd[1547]: time="2025-01-29T11:13:44.575252045Z" level=info msg="RemovePodSandbox for \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" Jan 29 11:13:44.576021 containerd[1547]: time="2025-01-29T11:13:44.575269605Z" level=info msg="Forcibly stopping sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\"" Jan 29 11:13:44.576021 containerd[1547]: time="2025-01-29T11:13:44.575330325Z" level=info msg="TearDown network for sandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" successfully" Jan 29 11:13:44.599177 containerd[1547]: time="2025-01-29T11:13:44.599146151Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.599374 containerd[1547]: time="2025-01-29T11:13:44.599341591Z" level=info msg="RemovePodSandbox \"a68ea60b2c82b0f2f9ec472162b0b3f19f702d2192b46903c27e57f85d4ca24d\" returns successfully" Jan 29 11:13:44.599778 containerd[1547]: time="2025-01-29T11:13:44.599755312Z" level=info msg="StopPodSandbox for \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\"" Jan 29 11:13:44.599937 containerd[1547]: time="2025-01-29T11:13:44.599920152Z" level=info msg="TearDown network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" successfully" Jan 29 11:13:44.599996 containerd[1547]: time="2025-01-29T11:13:44.599983992Z" level=info msg="StopPodSandbox for \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" returns successfully" Jan 29 11:13:44.600331 containerd[1547]: time="2025-01-29T11:13:44.600296112Z" level=info msg="RemovePodSandbox for \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\"" Jan 29 11:13:44.600394 containerd[1547]: time="2025-01-29T11:13:44.600337792Z" level=info msg="Forcibly stopping sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\"" Jan 29 11:13:44.600441 containerd[1547]: time="2025-01-29T11:13:44.600401712Z" level=info msg="TearDown network for sandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" successfully" Jan 29 11:13:44.602724 containerd[1547]: time="2025-01-29T11:13:44.602695875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.602782 containerd[1547]: time="2025-01-29T11:13:44.602745075Z" level=info msg="RemovePodSandbox \"11643256407ac90fbbeaaad68a62204eb57d2735aa62654ebfc0e725d28f5a32\" returns successfully" Jan 29 11:13:44.603057 containerd[1547]: time="2025-01-29T11:13:44.603035795Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:44.603136 containerd[1547]: time="2025-01-29T11:13:44.603119795Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:44.603136 containerd[1547]: time="2025-01-29T11:13:44.603134355Z" level=info msg="StopPodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:44.604524 containerd[1547]: time="2025-01-29T11:13:44.603366595Z" level=info msg="RemovePodSandbox for \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:44.604524 containerd[1547]: time="2025-01-29T11:13:44.603391795Z" level=info msg="Forcibly stopping sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\"" Jan 29 11:13:44.604524 containerd[1547]: time="2025-01-29T11:13:44.603462795Z" level=info msg="TearDown network for sandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" successfully" Jan 29 11:13:44.605701 containerd[1547]: time="2025-01-29T11:13:44.605675438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.605815 containerd[1547]: time="2025-01-29T11:13:44.605799118Z" level=info msg="RemovePodSandbox \"4e691fe50c8f6ade48b4caf05e2f301af762bd9b12932f249cf138ec1195456f\" returns successfully" Jan 29 11:13:44.606169 containerd[1547]: time="2025-01-29T11:13:44.606148038Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:44.606392 containerd[1547]: time="2025-01-29T11:13:44.606374039Z" level=info msg="TearDown network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" successfully" Jan 29 11:13:44.606480 containerd[1547]: time="2025-01-29T11:13:44.606466679Z" level=info msg="StopPodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" returns successfully" Jan 29 11:13:44.606791 containerd[1547]: time="2025-01-29T11:13:44.606766839Z" level=info msg="RemovePodSandbox for \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:44.606945 containerd[1547]: time="2025-01-29T11:13:44.606907599Z" level=info msg="Forcibly stopping sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\"" Jan 29 11:13:44.607082 containerd[1547]: time="2025-01-29T11:13:44.607066199Z" level=info msg="TearDown network for sandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" successfully" Jan 29 11:13:44.609750 containerd[1547]: time="2025-01-29T11:13:44.609720722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.609889 containerd[1547]: time="2025-01-29T11:13:44.609873122Z" level=info msg="RemovePodSandbox \"89461e80c6a73a971e500f2aa46238a7cba35a27d044893e658e9fbe1832e23f\" returns successfully" Jan 29 11:13:44.610320 containerd[1547]: time="2025-01-29T11:13:44.610287123Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" Jan 29 11:13:44.610388 containerd[1547]: time="2025-01-29T11:13:44.610377563Z" level=info msg="TearDown network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" successfully" Jan 29 11:13:44.610434 containerd[1547]: time="2025-01-29T11:13:44.610389443Z" level=info msg="StopPodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" returns successfully" Jan 29 11:13:44.610692 containerd[1547]: time="2025-01-29T11:13:44.610658323Z" level=info msg="RemovePodSandbox for \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" Jan 29 11:13:44.610742 containerd[1547]: time="2025-01-29T11:13:44.610697083Z" level=info msg="Forcibly stopping sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\"" Jan 29 11:13:44.610774 containerd[1547]: time="2025-01-29T11:13:44.610761483Z" level=info msg="TearDown network for sandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" successfully" Jan 29 11:13:44.612903 containerd[1547]: time="2025-01-29T11:13:44.612867646Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.612942 containerd[1547]: time="2025-01-29T11:13:44.612915046Z" level=info msg="RemovePodSandbox \"4851246c81aa523ef2a80b0ec96fec1ed458fe3b40bbcc5bce4599085c6e7326\" returns successfully" Jan 29 11:13:44.613220 containerd[1547]: time="2025-01-29T11:13:44.613193846Z" level=info msg="StopPodSandbox for \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\"" Jan 29 11:13:44.613291 containerd[1547]: time="2025-01-29T11:13:44.613275126Z" level=info msg="TearDown network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" successfully" Jan 29 11:13:44.613331 containerd[1547]: time="2025-01-29T11:13:44.613289966Z" level=info msg="StopPodSandbox for \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" returns successfully" Jan 29 11:13:44.613581 containerd[1547]: time="2025-01-29T11:13:44.613559606Z" level=info msg="RemovePodSandbox for \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\"" Jan 29 11:13:44.613622 containerd[1547]: time="2025-01-29T11:13:44.613584326Z" level=info msg="Forcibly stopping sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\"" Jan 29 11:13:44.613670 containerd[1547]: time="2025-01-29T11:13:44.613644166Z" level=info msg="TearDown network for sandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" successfully" Jan 29 11:13:44.615953 containerd[1547]: time="2025-01-29T11:13:44.615921849Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.616005 containerd[1547]: time="2025-01-29T11:13:44.615973209Z" level=info msg="RemovePodSandbox \"ae73915aedefe9a86b17fd3e86029b82060829ad2a93f78ab38dbf9e2fda15b8\" returns successfully" Jan 29 11:13:44.616297 containerd[1547]: time="2025-01-29T11:13:44.616273969Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:44.616377 containerd[1547]: time="2025-01-29T11:13:44.616362369Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:44.616377 containerd[1547]: time="2025-01-29T11:13:44.616375289Z" level=info msg="StopPodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:44.616675 containerd[1547]: time="2025-01-29T11:13:44.616638650Z" level=info msg="RemovePodSandbox for \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:44.616675 containerd[1547]: time="2025-01-29T11:13:44.616670090Z" level=info msg="Forcibly stopping sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\"" Jan 29 11:13:44.616736 containerd[1547]: time="2025-01-29T11:13:44.616727490Z" level=info msg="TearDown network for sandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" successfully" Jan 29 11:13:44.619133 containerd[1547]: time="2025-01-29T11:13:44.619103732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.619254 containerd[1547]: time="2025-01-29T11:13:44.619153652Z" level=info msg="RemovePodSandbox \"b6c6b8ddc4e6145f67dd880a88fc2e2f1f1e933f84f5a28dfcca480af6da275b\" returns successfully" Jan 29 11:13:44.619482 containerd[1547]: time="2025-01-29T11:13:44.619439013Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:44.619547 containerd[1547]: time="2025-01-29T11:13:44.619526853Z" level=info msg="TearDown network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" successfully" Jan 29 11:13:44.619547 containerd[1547]: time="2025-01-29T11:13:44.619541853Z" level=info msg="StopPodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" returns successfully" Jan 29 11:13:44.619794 containerd[1547]: time="2025-01-29T11:13:44.619761613Z" level=info msg="RemovePodSandbox for \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:44.619794 containerd[1547]: time="2025-01-29T11:13:44.619789093Z" level=info msg="Forcibly stopping sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\"" Jan 29 11:13:44.619869 containerd[1547]: time="2025-01-29T11:13:44.619843653Z" level=info msg="TearDown network for sandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" successfully" Jan 29 11:13:44.622111 containerd[1547]: time="2025-01-29T11:13:44.622071255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.622159 containerd[1547]: time="2025-01-29T11:13:44.622122855Z" level=info msg="RemovePodSandbox \"02a1129d23ab80443ca0fb08a4586c4b39645e84477eac30311e28b82ef34a6e\" returns successfully" Jan 29 11:13:44.622469 containerd[1547]: time="2025-01-29T11:13:44.622398816Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" Jan 29 11:13:44.622517 containerd[1547]: time="2025-01-29T11:13:44.622481256Z" level=info msg="TearDown network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" successfully" Jan 29 11:13:44.622517 containerd[1547]: time="2025-01-29T11:13:44.622491176Z" level=info msg="StopPodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" returns successfully" Jan 29 11:13:44.623881 containerd[1547]: time="2025-01-29T11:13:44.622829176Z" level=info msg="RemovePodSandbox for \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" Jan 29 11:13:44.623881 containerd[1547]: time="2025-01-29T11:13:44.622855656Z" level=info msg="Forcibly stopping sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\"" Jan 29 11:13:44.623881 containerd[1547]: time="2025-01-29T11:13:44.622912096Z" level=info msg="TearDown network for sandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" successfully" Jan 29 11:13:44.625172 containerd[1547]: time="2025-01-29T11:13:44.625121259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.625288 containerd[1547]: time="2025-01-29T11:13:44.625269899Z" level=info msg="RemovePodSandbox \"8c29ac1ae10c8422b3bb432cd93e050894ae69f426a886be7eaa6c3376a88094\" returns successfully" Jan 29 11:13:44.625645 containerd[1547]: time="2025-01-29T11:13:44.625620459Z" level=info msg="StopPodSandbox for \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\"" Jan 29 11:13:44.625731 containerd[1547]: time="2025-01-29T11:13:44.625713579Z" level=info msg="TearDown network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" successfully" Jan 29 11:13:44.625731 containerd[1547]: time="2025-01-29T11:13:44.625727979Z" level=info msg="StopPodSandbox for \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" returns successfully" Jan 29 11:13:44.626081 containerd[1547]: time="2025-01-29T11:13:44.626046100Z" level=info msg="RemovePodSandbox for \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\"" Jan 29 11:13:44.626116 containerd[1547]: time="2025-01-29T11:13:44.626088220Z" level=info msg="Forcibly stopping sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\"" Jan 29 11:13:44.626162 containerd[1547]: time="2025-01-29T11:13:44.626147940Z" level=info msg="TearDown network for sandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" successfully" Jan 29 11:13:44.628255 containerd[1547]: time="2025-01-29T11:13:44.628221262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.628310 containerd[1547]: time="2025-01-29T11:13:44.628267182Z" level=info msg="RemovePodSandbox \"6e67af127a7e5e8e2e657e8a7e9e805bce3ad13ec433c23b870bb8da31bdf584\" returns successfully" Jan 29 11:13:44.628714 containerd[1547]: time="2025-01-29T11:13:44.628570782Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:44.628714 containerd[1547]: time="2025-01-29T11:13:44.628650462Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:44.628714 containerd[1547]: time="2025-01-29T11:13:44.628660022Z" level=info msg="StopPodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:44.628921 containerd[1547]: time="2025-01-29T11:13:44.628899863Z" level=info msg="RemovePodSandbox for \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:44.628952 containerd[1547]: time="2025-01-29T11:13:44.628926263Z" level=info msg="Forcibly stopping sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\"" Jan 29 11:13:44.628994 containerd[1547]: time="2025-01-29T11:13:44.628981303Z" level=info msg="TearDown network for sandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" successfully" Jan 29 11:13:44.631437 containerd[1547]: time="2025-01-29T11:13:44.631317265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.631518 containerd[1547]: time="2025-01-29T11:13:44.631459985Z" level=info msg="RemovePodSandbox \"43c07615828dc2d8bcb56870ee74284c21a992541f50b63fef1c9fd5bd5614fe\" returns successfully" Jan 29 11:13:44.631845 containerd[1547]: time="2025-01-29T11:13:44.631818626Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:44.631916 containerd[1547]: time="2025-01-29T11:13:44.631901946Z" level=info msg="TearDown network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" successfully" Jan 29 11:13:44.631949 containerd[1547]: time="2025-01-29T11:13:44.631915386Z" level=info msg="StopPodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" returns successfully" Jan 29 11:13:44.632112 containerd[1547]: time="2025-01-29T11:13:44.632095986Z" level=info msg="RemovePodSandbox for \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:44.632142 containerd[1547]: time="2025-01-29T11:13:44.632118146Z" level=info msg="Forcibly stopping sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\"" Jan 29 11:13:44.632180 containerd[1547]: time="2025-01-29T11:13:44.632166986Z" level=info msg="TearDown network for sandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" successfully" Jan 29 11:13:44.634703 containerd[1547]: time="2025-01-29T11:13:44.634661189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.634754 containerd[1547]: time="2025-01-29T11:13:44.634717869Z" level=info msg="RemovePodSandbox \"4bea23bcc5b5fe5ba15b7ae006153f1f58244f32da21a18e56e9da0cf279e8f5\" returns successfully" Jan 29 11:13:44.635057 containerd[1547]: time="2025-01-29T11:13:44.635025789Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" Jan 29 11:13:44.635107 containerd[1547]: time="2025-01-29T11:13:44.635098709Z" level=info msg="TearDown network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" successfully" Jan 29 11:13:44.635129 containerd[1547]: time="2025-01-29T11:13:44.635108469Z" level=info msg="StopPodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" returns successfully" Jan 29 11:13:44.635337 containerd[1547]: time="2025-01-29T11:13:44.635315990Z" level=info msg="RemovePodSandbox for \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" Jan 29 11:13:44.635380 containerd[1547]: time="2025-01-29T11:13:44.635339950Z" level=info msg="Forcibly stopping sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\"" Jan 29 11:13:44.635428 containerd[1547]: time="2025-01-29T11:13:44.635399110Z" level=info msg="TearDown network for sandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" successfully" Jan 29 11:13:44.637808 containerd[1547]: time="2025-01-29T11:13:44.637767312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.637850 containerd[1547]: time="2025-01-29T11:13:44.637825872Z" level=info msg="RemovePodSandbox \"d42722f50073d2c1d92105d79bd076a094762ccadd2c66ba85877abd85a39034\" returns successfully" Jan 29 11:13:44.638211 containerd[1547]: time="2025-01-29T11:13:44.638173273Z" level=info msg="StopPodSandbox for \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\"" Jan 29 11:13:44.638270 containerd[1547]: time="2025-01-29T11:13:44.638250633Z" level=info msg="TearDown network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" successfully" Jan 29 11:13:44.638270 containerd[1547]: time="2025-01-29T11:13:44.638261593Z" level=info msg="StopPodSandbox for \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" returns successfully" Jan 29 11:13:44.639672 containerd[1547]: time="2025-01-29T11:13:44.638553193Z" level=info msg="RemovePodSandbox for \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\"" Jan 29 11:13:44.639672 containerd[1547]: time="2025-01-29T11:13:44.638578753Z" level=info msg="Forcibly stopping sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\"" Jan 29 11:13:44.639672 containerd[1547]: time="2025-01-29T11:13:44.638645113Z" level=info msg="TearDown network for sandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" successfully" Jan 29 11:13:44.640835 containerd[1547]: time="2025-01-29T11:13:44.640804875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:13:44.640940 containerd[1547]: time="2025-01-29T11:13:44.640923795Z" level=info msg="RemovePodSandbox \"878a8ca132eb54184a7c3eb7d0edbb1d95b2b6cbdae11c953f72a52e5dfa9ecc\" returns successfully" Jan 29 11:13:47.593629 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:55456.service - OpenSSH per-connection server daemon (10.0.0.1:55456). Jan 29 11:13:47.640349 sshd[6047]: Accepted publickey for core from 10.0.0.1 port 55456 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:13:47.641525 sshd-session[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:13:47.645457 systemd-logind[1528]: New session 20 of user core. Jan 29 11:13:47.649638 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:13:47.827689 sshd[6050]: Connection closed by 10.0.0.1 port 55456 Jan 29 11:13:47.828274 sshd-session[6047]: pam_unix(sshd:session): session closed for user core Jan 29 11:13:47.831319 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:55456.service: Deactivated successfully. Jan 29 11:13:47.833896 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:13:47.835017 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:13:47.836652 systemd-logind[1528]: Removed session 20.