Feb 13 15:29:36.892651 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:29:36.892676 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025 Feb 13 15:29:36.892686 kernel: KASLR enabled Feb 13 15:29:36.892692 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Feb 13 15:29:36.892698 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Feb 13 15:29:36.892703 kernel: random: crng init done Feb 13 15:29:36.892710 kernel: secureboot: Secure boot disabled Feb 13 15:29:36.892716 kernel: ACPI: Early table checksum verification disabled Feb 13 15:29:36.892722 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Feb 13 15:29:36.892729 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:29:36.892736 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892741 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892747 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892753 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892760 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892768 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892774 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892781 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892787 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:29:36.892793 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:29:36.892799 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Feb 13 15:29:36.892805 kernel: NUMA: Failed to initialise from firmware Feb 13 15:29:36.892811 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:29:36.892817 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Feb 13 15:29:36.892823 kernel: Zone ranges: Feb 13 15:29:36.892831 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:29:36.892837 kernel: DMA32 empty Feb 13 15:29:36.892843 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Feb 13 15:29:36.892849 kernel: Movable zone start for each node Feb 13 15:29:36.892855 kernel: Early memory node ranges Feb 13 15:29:36.892861 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Feb 13 15:29:36.892867 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Feb 13 15:29:36.892873 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Feb 13 15:29:36.892879 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Feb 13 15:29:36.892885 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Feb 13 15:29:36.892891 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Feb 13 15:29:36.892898 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Feb 13 15:29:36.892905 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Feb 13 15:29:36.892912 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Feb 13 15:29:36.892918 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:29:36.892927 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Feb 13 15:29:36.892966 kernel: psci: probing for conduit method from ACPI. Feb 13 15:29:36.892978 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:29:36.892988 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:29:36.892996 kernel: psci: Trusted OS migration not required Feb 13 15:29:36.893003 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:29:36.893011 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:29:36.893017 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:29:36.893024 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:29:36.893031 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:29:36.893037 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:29:36.893044 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:29:36.893051 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:29:36.893059 kernel: CPU features: detected: Spectre-v4 Feb 13 15:29:36.893066 kernel: CPU features: detected: Spectre-BHB Feb 13 15:29:36.893072 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:29:36.893078 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:29:36.893085 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:29:36.893091 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:29:36.893098 kernel: alternatives: applying boot alternatives Feb 13 15:29:36.893106 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:29:36.893112 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:29:36.893119 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:29:36.893126 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:29:36.893133 kernel: Fallback order for Node 0: 0 Feb 13 15:29:36.893140 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Feb 13 15:29:36.893146 kernel: Policy zone: Normal Feb 13 15:29:36.893153 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:29:36.893160 kernel: software IO TLB: area num 2. Feb 13 15:29:36.893166 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Feb 13 15:29:36.893174 kernel: Memory: 3882296K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 213704K reserved, 0K cma-reserved) Feb 13 15:29:36.893180 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:29:36.893187 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:29:36.893195 kernel: rcu: RCU event tracing is enabled. Feb 13 15:29:36.893201 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:29:36.893208 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:29:36.893216 kernel: Tracing variant of Tasks RCU enabled. 
Feb 13 15:29:36.893222 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:29:36.893229 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:29:36.893235 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:29:36.893242 kernel: GICv3: 256 SPIs implemented Feb 13 15:29:36.893249 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:29:36.893255 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:29:36.893261 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:29:36.893268 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:29:36.893275 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:29:36.893281 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:29:36.893289 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:29:36.893296 kernel: GICv3: using LPI property table @0x00000001000e0000 Feb 13 15:29:36.893303 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Feb 13 15:29:36.893309 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:29:36.893316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:29:36.893334 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:29:36.893341 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:29:36.893365 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:29:36.893373 kernel: Console: colour dummy device 80x25 Feb 13 15:29:36.893381 kernel: ACPI: Core revision 20230628 Feb 13 15:29:36.893388 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:29:36.893397 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:29:36.893404 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:29:36.893411 kernel: landlock: Up and running. Feb 13 15:29:36.893418 kernel: SELinux: Initializing. Feb 13 15:29:36.893424 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:29:36.893431 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:29:36.893437 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:29:36.893444 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:29:36.893451 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:29:36.893459 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:29:36.893466 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:29:36.893473 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:29:36.893479 kernel: Remapping and enabling EFI services. Feb 13 15:29:36.893486 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:29:36.893492 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:29:36.893499 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:29:36.893506 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Feb 13 15:29:36.893512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:29:36.893520 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:29:36.893527 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:29:36.893538 kernel: SMP: Total of 2 processors activated. Feb 13 15:29:36.893547 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:29:36.893554 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:29:36.893561 kernel: CPU features: detected: Common not Private translations Feb 13 15:29:36.893568 kernel: CPU features: detected: CRC32 instructions Feb 13 15:29:36.893575 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:29:36.893582 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:29:36.893590 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:29:36.893597 kernel: CPU features: detected: Privileged Access Never Feb 13 15:29:36.893604 kernel: CPU features: detected: RAS Extension Support Feb 13 15:29:36.893612 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:29:36.893619 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:29:36.893626 kernel: alternatives: applying system-wide alternatives Feb 13 15:29:36.893633 kernel: devtmpfs: initialized Feb 13 15:29:36.893640 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:29:36.893649 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:29:36.893657 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:29:36.893664 kernel: SMBIOS 3.0.0 present. Feb 13 15:29:36.893671 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Feb 13 15:29:36.893678 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:29:36.893685 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:29:36.893692 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:29:36.893700 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:29:36.893706 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:29:36.893715 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1 Feb 13 15:29:36.893723 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:29:36.893730 kernel: cpuidle: using governor menu Feb 13 15:29:36.893737 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 15:29:36.893744 kernel: ASID allocator initialised with 32768 entries Feb 13 15:29:36.893751 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:29:36.893758 kernel: Serial: AMBA PL011 UART driver Feb 13 15:29:36.893765 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:29:36.893773 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:29:36.893781 kernel: Modules: 508880 pages in range for PLT usage Feb 13 15:29:36.893789 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:29:36.893796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:29:36.893803 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:29:36.893810 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:29:36.893818 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:29:36.893825 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:29:36.893833 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:29:36.893840 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:29:36.893848 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:29:36.893855 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:29:36.893862 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:29:36.893869 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:29:36.893877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:29:36.893883 kernel: ACPI: Interpreter enabled Feb 13 15:29:36.893892 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:29:36.893901 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:29:36.893909 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:29:36.893919 kernel: printk: console [ttyAMA0] enabled Feb 13 15:29:36.893927 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:29:36.894118 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:29:36.894196 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:29:36.894261 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:29:36.894359 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:29:36.894427 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:29:36.894442 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:29:36.894449 kernel: PCI host bridge to bus 0000:00 Feb 13 15:29:36.894521 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:29:36.894591 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:29:36.894652 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:29:36.894712 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:29:36.894849 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:29:36.894974 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Feb 13 15:29:36.895059 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Feb 13 15:29:36.895127 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:29:36.895204 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.895413 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Feb 13 15:29:36.895589 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.895668 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Feb 13 15:29:36.895753 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.895821 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Feb 13 15:29:36.895895 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.895976 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Feb 13 15:29:36.896053 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.896121 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Feb 13 15:29:36.896199 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.896266 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Feb 13 15:29:36.896411 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.896485 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Feb 13 15:29:36.896560 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.896632 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Feb 13 15:29:36.896708 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:29:36.896774 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Feb 13 15:29:36.896847 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Feb 13 15:29:36.896913 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Feb 13 15:29:36.897054 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:29:36.897132 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Feb 13 15:29:36.897218 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:29:36.897289 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:29:36.897458 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 15:29:36.897536 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Feb 13 15:29:36.897618 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Feb 13 15:29:36.897689 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Feb 13 15:29:36.897761 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Feb 13 15:29:36.897834 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Feb 13 15:29:36.897904 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Feb 13 15:29:36.897993 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 15:29:36.898061 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Feb 13 15:29:36.898131 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Feb 13 15:29:36.898211 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Feb 13 15:29:36.898283 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Feb 13 15:29:36.899263 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:29:36.899375 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:29:36.899446 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Feb 13 15:29:36.899512 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Feb 13 15:29:36.899582 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:29:36.899667 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 13 15:29:36.899734 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:29:36.899799 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:29:36.899867 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 13 15:29:36.899972 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 13 15:29:36.900052 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Feb 13 15:29:36.900128 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 15:29:36.900198 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:29:36.900389 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:29:36.900473 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 15:29:36.900544 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Feb 13 15:29:36.900610 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 13 15:29:36.900680 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 15:29:36.900747 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:29:36.900843 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:29:36.900925 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 15:29:36.901011 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:29:36.901079 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:29:36.901151 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 15:29:36.901217 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:29:36.901281 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:29:36.902461 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 15:29:36.902554 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:29:36.902623 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:29:36.902696 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 15:29:36.902762 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:29:36.902827 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:29:36.902906 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Feb 13 15:29:36.902995 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:29:36.903075 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 13 15:29:36.903142 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:29:36.903210 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 13 15:29:36.903275 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:29:36.904097 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Feb 13 15:29:36.904176 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:29:36.904249 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Feb 13 15:29:36.904348 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:29:36.904434 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Feb 13 15:29:36.904511 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:29:36.904580 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Feb 13 15:29:36.904686 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:29:36.904760 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Feb 13 15:29:36.904828 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:29:36.904902 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Feb 13 15:29:36.905030 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:29:36.905110 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Feb 13 15:29:36.905178 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Feb 13 15:29:36.905248 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Feb 13 15:29:36.905318 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 15:29:36.905470 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Feb 13 15:29:36.905544 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 15:29:36.905613 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Feb 13 15:29:36.905678 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 15:29:36.905745 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Feb 13 15:29:36.905837 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 15:29:36.905925 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Feb 13 15:29:36.906011 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 15:29:36.906088 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Feb 13 15:29:36.906160 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 15:29:36.906226 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Feb 13 15:29:36.906289 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 15:29:36.907418 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Feb 13 15:29:36.907498 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 15:29:36.907564 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Feb 13 15:29:36.907628 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Feb 13 15:29:36.907697 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Feb 13 15:29:36.907770 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Feb 13 15:29:36.907847 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:29:36.907914 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Feb 13 15:29:36.908054 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Feb 13 15:29:36.908130 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 15:29:36.908193 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Feb 13 15:29:36.908255 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:29:36.908412 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Feb 13 15:29:36.908502 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Feb 13 15:29:36.908568 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 15:29:36.908630 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Feb 13 15:29:36.908694 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:29:36.908767 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:29:36.908837 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Feb 13 15:29:36.908903 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Feb 13 15:29:36.909021 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 15:29:36.909096 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Feb 13 15:29:36.909161 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:29:36.909235 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:29:36.909303 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Feb 13 15:29:36.910472 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 15:29:36.910557 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Feb 13 15:29:36.910624 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:29:36.910697 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Feb 13 15:29:36.910785 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Feb 13 15:29:36.910854 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Feb 13 15:29:36.910919 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 15:29:36.911003 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Feb 13 15:29:36.911073 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:29:36.911153 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Feb 13 15:29:36.911220 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Feb 13 15:29:36.911288 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Feb 13 15:29:36.911389 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 15:29:36.911457 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Feb 13 15:29:36.911524 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:29:36.911598 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Feb 13 15:29:36.911665 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Feb 13 15:29:36.911738 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Feb 13 15:29:36.911808 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Feb 13 15:29:36.911872 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 15:29:36.911943 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Feb 13 15:29:36.912015 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:29:36.912086 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Feb 13 15:29:36.912152 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 15:29:36.912219 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Feb 13 15:29:36.912283 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:29:36.913031 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Feb 13 15:29:36.913113 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Feb 13 15:29:36.913177 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Feb 13 15:29:36.913243 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:29:36.913313 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:29:36.913429 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:29:36.913512 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:29:36.913595 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 15:29:36.913658 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Feb 13 15:29:36.913717 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:29:36.913787 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Feb 13 15:29:36.913847 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Feb 13 15:29:36.913910 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:29:36.914020 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Feb 13 15:29:36.914091 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Feb 13 15:29:36.914151 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:29:36.914219 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 15:29:36.914280 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Feb 13 15:29:36.914419 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:29:36.914500 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Feb 13 15:29:36.914565 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Feb 13 15:29:36.914628 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:29:36.914695 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Feb 13 15:29:36.914758 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Feb 13 15:29:36.914818 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:29:36.914890 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Feb 13 15:29:36.914998 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Feb 13 15:29:36.915058 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:29:36.915126 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Feb 13 15:29:36.915190 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Feb 13 15:29:36.915266 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:29:36.915418 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Feb 13 15:29:36.915484 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Feb 13 15:29:36.915543 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:29:36.915553 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:29:36.915561 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:29:36.915569 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:29:36.915577 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:29:36.915588 kernel: iommu: Default domain type: Translated Feb 13 15:29:36.915596 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:29:36.915603 kernel: efivars: Registered efivars operations Feb 13 15:29:36.915611 kernel: vgaarb: loaded Feb 13 15:29:36.915618 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:29:36.915626 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:29:36.915634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:29:36.915641 kernel: pnp: PnP ACPI init Feb 13 15:29:36.915716 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:29:36.915729 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:29:36.915737 kernel: NET: Registered PF_INET protocol family Feb 13 15:29:36.915744 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:29:36.915752 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:29:36.915759 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:29:36.915767 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:29:36.915774 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:29:36.915782 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:29:36.915791 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:29:36.915799 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:29:36.915807 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:29:36.915879 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Feb 13 15:29:36.915890 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:29:36.915898 kernel: kvm [1]: HYP mode not available Feb 13 15:29:36.915905 kernel: Initialise system trusted keyrings Feb 13 15:29:36.915913 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:29:36.915921 kernel: Key type asymmetric registered Feb 13 15:29:36.915930 kernel: Asymmetric key parser 'x509' registered Feb 13 15:29:36.915975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:29:36.915984 kernel: io scheduler mq-deadline registered Feb 13 15:29:36.915991 kernel: io scheduler kyber registered Feb 13 15:29:36.915999 kernel: io scheduler bfq registered Feb 13 15:29:36.916007 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:29:36.916085 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Feb 13 15:29:36.916153 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Feb 13 15:29:36.916221 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.916289 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Feb 13 15:29:36.916371 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Feb 13 15:29:36.916437 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.916515 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:29:36.916581 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:29:36.916650 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.916723 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:29:36.916789 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:29:36.916852 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.916920 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:29:36.917006 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:29:36.917103 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.917174 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:29:36.917240 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:29:36.917304 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.917407 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:29:36.917475 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:29:36.917544 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.917613 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:29:36.917677 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:29:36.917741 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.917751 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:29:36.917815 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:29:36.917887 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:29:36.917968 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:29:36.917979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:29:36.917986 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:29:36.917994 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:29:36.918075 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:29:36.918150 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:29:36.918161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:29:36.918172 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:29:36.918240 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:29:36.918250 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:29:36.918258 kernel: thunder_xcv, ver 1.0 Feb 13 15:29:36.918265 kernel: thunder_bgx, ver 1.0 Feb 13 15:29:36.918273 kernel: nicpf, ver 1.0 Feb 13 15:29:36.918280 kernel: nicvf, ver 
1.0 Feb 13 15:29:36.919814 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:29:36.919909 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:29:36 UTC (1739460576) Feb 13 15:29:36.919920 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:29:36.919928 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:29:36.919947 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:29:36.919957 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:29:36.919965 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:29:36.919972 kernel: Segment Routing with IPv6 Feb 13 15:29:36.919979 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:29:36.919987 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:29:36.919997 kernel: Key type dns_resolver registered Feb 13 15:29:36.920004 kernel: registered taskstats version 1 Feb 13 15:29:36.920012 kernel: Loading compiled-in X.509 certificates Feb 13 15:29:36.920019 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e' Feb 13 15:29:36.920027 kernel: Key type .fscrypt registered Feb 13 15:29:36.920034 kernel: Key type fscrypt-provisioning registered Feb 13 15:29:36.920042 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:29:36.920050 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:29:36.920057 kernel: ima: No architecture policies found Feb 13 15:29:36.920066 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:29:36.920074 kernel: clk: Disabling unused clocks Feb 13 15:29:36.920081 kernel: Freeing unused kernel memory: 39936K Feb 13 15:29:36.920088 kernel: Run /init as init process Feb 13 15:29:36.920096 kernel: with arguments: Feb 13 15:29:36.920103 kernel: /init Feb 13 15:29:36.920110 kernel: with environment: Feb 13 15:29:36.920117 kernel: HOME=/ Feb 13 15:29:36.920126 kernel: TERM=linux Feb 13 15:29:36.920136 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:29:36.920146 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:29:36.920156 systemd[1]: Detected virtualization kvm. Feb 13 15:29:36.920164 systemd[1]: Detected architecture arm64. Feb 13 15:29:36.920172 systemd[1]: Running in initrd. Feb 13 15:29:36.920180 systemd[1]: No hostname configured, using default hostname. Feb 13 15:29:36.920188 systemd[1]: Hostname set to . Feb 13 15:29:36.920197 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:29:36.920205 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:29:36.920214 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:29:36.920222 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:29:36.920231 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:29:36.920239 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:29:36.920247 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Feb 13 15:29:36.920256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:29:36.920267 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:29:36.920276 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:29:36.920284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:29:36.920292 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:29:36.920301 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:29:36.920309 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:29:36.920317 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:29:36.921510 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:29:36.921520 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:29:36.921531 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:29:36.921541 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:29:36.921551 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:29:36.921560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:29:36.921569 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:29:36.921577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:29:36.921585 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:29:36.921595 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:29:36.921604 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:29:36.921612 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:29:36.921620 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:29:36.921628 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:29:36.921636 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:29:36.921644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:36.921652 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:29:36.921661 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:29:36.921670 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:29:36.921715 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 15:29:36.921740 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:29:36.921749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:36.921757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:29:36.921767 systemd-journald[237]: Journal started Feb 13 15:29:36.921793 systemd-journald[237]: Runtime Journal (/run/log/journal/a4b3e4b571a24e8e8820ebbf268d841e) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:29:36.906001 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 15:29:36.925178 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:29:36.925201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Feb 13 15:29:36.926691 kernel: Bridge firewalling registered Feb 13 15:29:36.926375 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 15:29:36.927614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:29:36.940618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:29:36.944563 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:36.948648 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:29:36.952545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:29:36.967624 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:36.972895 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:29:36.977687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:29:36.984615 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:29:36.990583 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:29:36.992522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:29:36.996919 dracut-cmdline[270]: dracut-dracut-053 Feb 13 15:29:36.999341 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:29:37.033430 systemd-resolved[272]: Positive Trust Anchors: Feb 13 15:29:37.034090 systemd-resolved[272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:29:37.034124 systemd-resolved[272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:29:37.043629 systemd-resolved[272]: Defaulting to hostname 'linux'. Feb 13 15:29:37.045390 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:29:37.046642 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:29:37.083352 kernel: SCSI subsystem initialized Feb 13 15:29:37.088406 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:29:37.096375 kernel: iscsi: registered transport (tcp) Feb 13 15:29:37.109384 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:29:37.109458 kernel: QLogic iSCSI HBA Driver Feb 13 15:29:37.161298 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:29:37.165581 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:29:37.198519 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:29:37.198594 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:29:37.199388 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:29:37.253396 kernel: raid6: neonx8 gen() 15653 MB/s Feb 13 15:29:37.270389 kernel: raid6: neonx4 gen() 15669 MB/s Feb 13 15:29:37.287375 kernel: raid6: neonx2 gen() 12561 MB/s Feb 13 15:29:37.304376 kernel: raid6: neonx1 gen() 10423 MB/s Feb 13 15:29:37.321423 kernel: raid6: int64x8 gen() 6726 MB/s Feb 13 15:29:37.338380 kernel: raid6: int64x4 gen() 7296 MB/s Feb 13 15:29:37.355562 kernel: raid6: int64x2 gen() 6077 MB/s Feb 13 15:29:37.372482 kernel: raid6: int64x1 gen() 5017 MB/s Feb 13 15:29:37.372590 kernel: raid6: using algorithm neonx4 gen() 15669 MB/s Feb 13 15:29:37.389378 kernel: raid6: .... xor() 12334 MB/s, rmw enabled Feb 13 15:29:37.389496 kernel: raid6: using neon recovery algorithm Feb 13 15:29:37.396183 kernel: xor: measuring software checksum speed Feb 13 15:29:37.396384 kernel: 8regs : 21613 MB/sec Feb 13 15:29:37.396399 kernel: 32regs : 18130 MB/sec Feb 13 15:29:37.396415 kernel: arm64_neon : 24338 MB/sec Feb 13 15:29:37.396425 kernel: xor: using function: arm64_neon (24338 MB/sec) Feb 13 15:29:37.447406 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:29:37.463498 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:29:37.470574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:29:37.498870 systemd-udevd[455]: Using default interface naming scheme 'v255'. Feb 13 15:29:37.502457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:29:37.512129 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:29:37.530973 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 15:29:37.574206 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:29:37.582707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:29:37.634062 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:29:37.644625 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:29:37.664753 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:29:37.668494 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:29:37.670025 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:29:37.671792 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:29:37.679802 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:29:37.694264 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:29:37.750427 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:29:37.758160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:29:37.759431 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:29:37.759483 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:29:37.758302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:29:37.762817 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:29:37.766369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:29:37.766494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:37.767806 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:37.773690 kernel: ACPI: bus type USB registered Feb 13 15:29:37.773729 kernel: usbcore: registered new interface driver usbfs Feb 13 15:29:37.773747 kernel: usbcore: registered new interface driver hub Feb 13 15:29:37.773764 kernel: usbcore: registered new device driver usb Feb 13 15:29:37.778640 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:37.791569 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:29:37.794803 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:29:37.794924 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:29:37.794954 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:29:37.800795 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:37.810616 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:29:37.822469 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:29:37.822595 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:29:37.822683 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:29:37.822768 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:29:37.822848 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:29:37.822926 kernel: hub 1-0:1.0: USB hub found Feb 13 15:29:37.823055 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:29:37.823142 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:29:37.823282 kernel: hub 2-0:1.0: USB hub found Feb 13 15:29:37.823398 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:29:37.811604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:29:37.840013 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:29:37.851434 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:29:37.851590 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:29:37.851673 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:29:37.851754 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:29:37.851831 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:29:37.851841 kernel: GPT:17805311 != 80003071 Feb 13 15:29:37.851850 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:29:37.851859 kernel: GPT:17805311 != 80003071 Feb 13 15:29:37.851867 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:29:37.851876 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:37.851887 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:29:37.849242 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:29:37.892343 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (516) Feb 13 15:29:37.899357 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (510) Feb 13 15:29:37.899435 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:29:37.915806 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:29:37.920169 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:29:37.921559 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:29:37.929706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:29:37.936516 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:29:37.961239 disk-uuid[576]: Primary Header is updated. Feb 13 15:29:37.961239 disk-uuid[576]: Secondary Entries is updated. Feb 13 15:29:37.961239 disk-uuid[576]: Secondary Header is updated. Feb 13 15:29:37.968356 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:38.062379 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:29:38.305412 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:29:38.439363 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:29:38.439430 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:29:38.440755 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:29:38.495106 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:29:38.495349 kernel: usbcore: registered new interface driver usbhid Feb 13 15:29:38.495361 kernel: usbhid: USB HID core driver Feb 13 15:29:38.985399 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:29:38.988158 disk-uuid[577]: The operation has completed successfully. Feb 13 15:29:39.048976 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:29:39.049111 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:29:39.081665 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:29:39.087237 sh[592]: Success Feb 13 15:29:39.102389 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:29:39.162233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:29:39.170511 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:29:39.173366 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:29:39.194455 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:29:39.194518 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:39.194538 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:29:39.195403 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:29:39.195434 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:29:39.204367 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:29:39.206874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:29:39.208672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:29:39.215525 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:29:39.218229 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:29:39.229524 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:29:39.229599 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:39.229630 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:29:39.233488 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:29:39.233553 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:29:39.247888 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:29:39.248521 kernel: BTRFS info (device sda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:29:39.256679 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:29:39.262605 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:29:39.364224 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:29:39.372692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:29:39.372973 ignition[674]: Ignition 2.20.0 Feb 13 15:29:39.372980 ignition[674]: Stage: fetch-offline Feb 13 15:29:39.373023 ignition[674]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:39.376124 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:29:39.373031 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:39.373199 ignition[674]: parsed url from cmdline: "" Feb 13 15:29:39.373203 ignition[674]: no config URL provided Feb 13 15:29:39.373208 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:29:39.373215 ignition[674]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:29:39.373221 ignition[674]: failed to fetch config: resource requires networking Feb 13 15:29:39.373621 ignition[674]: Ignition finished successfully Feb 13 15:29:39.402094 systemd-networkd[779]: lo: Link UP Feb 13 15:29:39.402111 systemd-networkd[779]: lo: Gained carrier Feb 13 15:29:39.403965 systemd-networkd[779]: Enumeration completed Feb 13 15:29:39.404508 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:39.404511 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:29:39.405409 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:29:39.406082 systemd[1]: Reached target network.target - Network. Feb 13 15:29:39.407019 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:39.407022 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:39.407817 systemd-networkd[779]: eth0: Link UP Feb 13 15:29:39.407820 systemd-networkd[779]: eth0: Gained carrier Feb 13 15:29:39.407830 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:39.414684 systemd-networkd[779]: eth1: Link UP Feb 13 15:29:39.414688 systemd-networkd[779]: eth1: Gained carrier Feb 13 15:29:39.414699 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:39.414997 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:29:39.430603 ignition[783]: Ignition 2.20.0 Feb 13 15:29:39.430614 ignition[783]: Stage: fetch Feb 13 15:29:39.430800 ignition[783]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:39.430810 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:39.430901 ignition[783]: parsed url from cmdline: "" Feb 13 15:29:39.430904 ignition[783]: no config URL provided Feb 13 15:29:39.430909 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:29:39.430916 ignition[783]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:29:39.431050 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 15:29:39.432004 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 15:29:39.443509 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:29:39.476481 systemd-networkd[779]: eth0: DHCPv4 address 142.132.179.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:29:39.633072 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 15:29:39.636610 ignition[783]: GET result: OK Feb 13 15:29:39.636716 ignition[783]: parsing config with SHA512: d214380b2d99ddf214d649e870c16b36597bbb6ae742369b7c2dfbcbae8c2d4baa6c1d2804bb02e3dbb6689c2bb987872f2a61f395caf998caf4503fb96324cd Feb 13 15:29:39.643705 unknown[783]: fetched base config from "system" Feb 13 15:29:39.643718 unknown[783]: fetched base config from "system" Feb 13 15:29:39.643791 unknown[783]: fetched user config from "hetzner" Feb 13 15:29:39.645532 ignition[783]: fetch: fetch complete Feb 13 15:29:39.645540 ignition[783]: fetch: fetch passed Feb 13 15:29:39.645612 ignition[783]: Ignition finished successfully Feb 13 15:29:39.648024 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:29:39.654531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
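The fetch stage above shows the ordering dependency between networking and the Hetzner metadata service: attempt #1 against http://169.254.169.254/hetzner/v1/userdata fails with "network is unreachable" before DHCP completes, and attempt #2 succeeds once eth0/eth1 have addresses. Ignition itself is a Go program; strictly as an illustration of the retry pattern visible in the log, a Python sketch of the same fetch-with-backoff might read as follows (endpoint taken from the log; attempt count and delay are assumptions):

```python
#!/usr/bin/env python3
"""Sketch: fetch instance userdata with retries, mirroring the attempts
logged by Ignition above. Illustrative only, not Ignition's implementation."""
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # from the log

def fetch_userdata(url: str = USERDATA_URL, attempts: int = 5,
                   delay: float = 2.0) -> bytes:
    # Early attempts can fail with "network is unreachable" until DHCP has
    # configured an interface, so retry with a fixed delay between tries.
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError as exc:
            print(f"GET {url}: attempt #{attempt} failed: {exc}")
            if attempt == attempts:
                raise
            time.sleep(delay)

if __name__ == "__main__":
    data = fetch_userdata()
    print(f"fetched {len(data)} bytes of userdata")
```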
Feb 13 15:29:39.682752 ignition[791]: Ignition 2.20.0 Feb 13 15:29:39.682764 ignition[791]: Stage: kargs Feb 13 15:29:39.683009 ignition[791]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:39.683021 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:39.684029 ignition[791]: kargs: kargs passed Feb 13 15:29:39.684091 ignition[791]: Ignition finished successfully Feb 13 15:29:39.687370 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:29:39.691584 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:29:39.709795 ignition[797]: Ignition 2.20.0 Feb 13 15:29:39.709812 ignition[797]: Stage: disks Feb 13 15:29:39.710125 ignition[797]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:39.710142 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:39.712857 ignition[797]: disks: disks passed Feb 13 15:29:39.712954 ignition[797]: Ignition finished successfully Feb 13 15:29:39.715602 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:29:39.716813 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:29:39.717829 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:29:39.719245 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:29:39.720708 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:29:39.722887 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:29:39.740649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:29:39.762366 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 15:29:39.766563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:29:39.771571 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:29:39.821551 kernel: EXT4-fs (sda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none. Feb 13 15:29:39.822517 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:29:39.823740 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:29:39.833529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:29:39.838345 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:29:39.840881 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:29:39.843455 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:29:39.843535 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:29:39.854274 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:29:39.861623 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:29:39.865379 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814) Feb 13 15:29:39.871627 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:29:39.871699 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:39.873454 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:29:39.884366 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:29:39.884449 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:29:39.888183 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:29:39.925794 coreos-metadata[816]: Feb 13 15:29:39.924 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 15:29:39.928642 coreos-metadata[816]: Feb 13 15:29:39.928 INFO Fetch successful Feb 13 15:29:39.929385 coreos-metadata[816]: Feb 13 15:29:39.929 INFO wrote hostname ci-4186-1-1-6-ce8ef0549e to /sysroot/etc/hostname Feb 13 15:29:39.932315 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:29:39.936482 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:29:39.942613 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:29:39.948641 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:29:39.954317 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:29:40.065400 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:29:40.070498 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:29:40.073252 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:29:40.083359 kernel: BTRFS info (device sda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:29:40.107387 ignition[931]: INFO : Ignition 2.20.0 Feb 13 15:29:40.107387 ignition[931]: INFO : Stage: mount Feb 13 15:29:40.107387 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:40.107387 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:40.110515 ignition[931]: INFO : mount: mount passed Feb 13 15:29:40.111966 ignition[931]: INFO : Ignition finished successfully Feb 13 15:29:40.113144 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:29:40.117396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:29:40.122680 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:29:40.194050 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:29:40.202528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:29:40.214366 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (943) Feb 13 15:29:40.215795 kernel: BTRFS info (device sda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:29:40.215855 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:29:40.215870 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:29:40.219359 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:29:40.219421 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:29:40.222527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
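flatcar-metadata-hostname.service, finished above, does one small job: fetch the hostname from the Hetzner metadata endpoint and write it into the not-yet-switched root at /sysroot/etc/hostname. A compact sketch of the same idea, with the endpoint and target path taken from the log and everything else assumed:

```python
#!/usr/bin/env python3
"""Sketch: write the hostname fetched from the metadata service into the
target root, as flatcar-metadata-hostname.service does in the log above."""
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log
TARGET = "/sysroot/etc/hostname"  # from the log

def write_hostname(url: str = HOSTNAME_URL, target: str = TARGET) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    # The log records: "wrote hostname ci-4186-1-1-6-ce8ef0549e to /sysroot/etc/hostname"
    with open(target, "w", encoding="ascii") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print(f"wrote hostname {write_hostname()!r}")
```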
Feb 13 15:29:40.243071 ignition[960]: INFO : Ignition 2.20.0 Feb 13 15:29:40.243071 ignition[960]: INFO : Stage: files Feb 13 15:29:40.244233 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:40.244233 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:40.246839 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:29:40.246839 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:29:40.246839 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:29:40.251733 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:29:40.252578 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:29:40.252578 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:29:40.252164 unknown[960]: wrote ssh authorized keys file for user: core Feb 13 15:29:40.254853 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:29:40.254853 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:29:40.341550 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:29:40.938601 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:29:40.938601 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:29:40.942662 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:29:41.193688 systemd-networkd[779]: eth0: Gained IPv6LL Feb 13 15:29:41.194848 systemd-networkd[779]: eth1: Gained IPv6LL Feb 13 15:29:41.526252 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:29:41.854024 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:29:41.854024 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:29:41.856725 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:29:41.857884 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:29:41.857884 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:29:41.857884 ignition[960]: INFO : files: files passed Feb 13 15:29:41.857884 ignition[960]: INFO : Ignition finished successfully Feb 13 15:29:41.859343 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:29:41.868908 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:29:41.870505 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:29:41.873745 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:29:41.874096 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:29:41.889628 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:29:41.889628 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:29:41.891968 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:29:41.893053 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:29:41.894561 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:29:41.901606 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:29:41.935342 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:29:41.936295 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:29:41.937793 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:29:41.939107 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:29:41.941046 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:29:41.946510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:29:41.962797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:29:41.968605 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:29:41.983054 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:29:41.983786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:29:41.985166 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:29:41.986165 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:29:41.986287 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:29:41.987541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:29:41.988112 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:29:41.989109 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:29:41.990101 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:29:41.991017 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:29:41.992089 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:29:41.993125 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:29:41.994106 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:29:41.994988 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:29:41.995915 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:29:41.996682 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:29:41.996846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:29:41.997909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:29:41.998975 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:29:41.999879 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:29:42.000002 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 15:29:42.000922 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:29:42.001098 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:29:42.002371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:29:42.002526 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:29:42.003746 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:29:42.003892 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:29:42.004596 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:29:42.004732 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:29:42.015497 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:29:42.018684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:29:42.019170 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:29:42.019413 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:29:42.022692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:29:42.022860 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:29:42.034403 ignition[1012]: INFO : Ignition 2.20.0 Feb 13 15:29:42.034403 ignition[1012]: INFO : Stage: umount Feb 13 15:29:42.034403 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:29:42.034403 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:29:42.039545 ignition[1012]: INFO : umount: umount passed Feb 13 15:29:42.039545 ignition[1012]: INFO : Ignition finished successfully Feb 13 15:29:42.036574 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:29:42.036714 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:29:42.040883 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:29:42.042695 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:29:42.047525 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:29:42.048797 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:29:42.051044 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:29:42.051127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:29:42.053570 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:29:42.053622 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:29:42.054193 systemd[1]: Stopped target network.target - Network. Feb 13 15:29:42.055407 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:29:42.055468 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:29:42.056408 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:29:42.057114 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:29:42.060376 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:29:42.062147 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:29:42.062817 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:29:42.063912 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 13 15:29:42.063999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:29:42.065131 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:29:42.065173 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:29:42.066260 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:29:42.066311 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:29:42.067314 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:29:42.067376 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:29:42.068467 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:29:42.069284 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:29:42.071727 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:29:42.072316 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:29:42.072680 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:29:42.074112 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:29:42.074223 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:29:42.075399 systemd-networkd[779]: eth0: DHCPv6 lease lost Feb 13 15:29:42.077149 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:29:42.077288 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:29:42.081002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:29:42.081074 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:29:42.081436 systemd-networkd[779]: eth1: DHCPv6 lease lost Feb 13 15:29:42.083005 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:29:42.084406 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:29:42.086224 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:29:42.086889 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:29:42.097443 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:29:42.098107 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:29:42.098199 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:29:42.100033 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:29:42.100086 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:42.102039 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:29:42.102089 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:29:42.103563 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:29:42.116998 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:29:42.117129 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:29:42.121079 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:29:42.121250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:29:42.122735 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:29:42.122782 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:29:42.123703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Feb 13 15:29:42.123740 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:29:42.124786 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:29:42.124838 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:29:42.126582 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:29:42.126636 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:29:42.128270 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:29:42.128328 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:29:42.142187 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:29:42.143666 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:29:42.143790 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:29:42.145288 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:29:42.145351 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:29:42.150029 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:29:42.150098 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:29:42.152874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:29:42.152947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:42.154827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:29:42.155562 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:29:42.157258 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:29:42.163572 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:29:42.175087 systemd[1]: Switching root. Feb 13 15:29:42.202308 systemd-journald[237]: Journal stopped Feb 13 15:29:43.192401 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 15:29:43.192467 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:29:43.192479 kernel: SELinux: policy capability open_perms=1 Feb 13 15:29:43.192489 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:29:43.192502 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:29:43.192513 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:29:43.192523 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:29:43.192532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:29:43.192542 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:29:43.192555 kernel: audit: type=1403 audit(1739460582.421:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:29:43.192565 systemd[1]: Successfully loaded SELinux policy in 37.944ms. Feb 13 15:29:43.192588 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.696ms. Feb 13 15:29:43.192601 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:29:43.192612 systemd[1]: Detected virtualization kvm. 
Feb 13 15:29:43.192622 systemd[1]: Detected architecture arm64. Feb 13 15:29:43.192632 systemd[1]: Detected first boot. Feb 13 15:29:43.192642 systemd[1]: Hostname set to . Feb 13 15:29:43.192653 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:29:43.192664 zram_generator::config[1054]: No configuration found. Feb 13 15:29:43.192675 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:29:43.192687 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:29:43.192697 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:29:43.192707 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:29:43.192720 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:29:43.192733 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:29:43.192743 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:29:43.192753 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:29:43.192764 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:29:43.192777 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:29:43.192787 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:29:43.192797 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:29:43.192808 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:29:43.192819 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:29:43.192831 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:29:43.192843 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:29:43.192854 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:29:43.192865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:29:43.192877 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:29:43.192888 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:29:43.192898 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:29:43.192912 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:29:43.192957 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:29:43.192975 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:29:43.192989 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:29:43.193000 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:29:43.193011 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:29:43.193022 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:29:43.193033 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:29:43.193044 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:29:43.193054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:29:43.193067 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:29:43.193078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:29:43.193089 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:29:43.193101 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:29:43.193112 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:29:43.193122 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:29:43.193132 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:29:43.193143 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:29:43.193153 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:29:43.193171 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:29:43.193184 systemd[1]: Reached target machines.target - Containers. Feb 13 15:29:43.193197 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:29:43.193210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:43.193221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:29:43.193235 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:29:43.193247 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:43.193259 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:29:43.193272 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:43.193284 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:29:43.193295 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:43.193308 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:29:43.193333 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:29:43.193346 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:29:43.193357 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:29:43.193368 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:29:43.193381 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:29:43.193392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:29:43.193403 kernel: loop: module loaded Feb 13 15:29:43.193413 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:29:43.193423 kernel: ACPI: bus type drm_connector registered Feb 13 15:29:43.193434 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:29:43.193444 kernel: fuse: init (API version 7.39) Feb 13 15:29:43.193454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:29:43.193465 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:29:43.193475 systemd[1]: Stopped verity-setup.service. 
Feb 13 15:29:43.193488 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:29:43.193499 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:29:43.193511 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:29:43.193521 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:29:43.193534 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:29:43.193544 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:29:43.193555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:29:43.193565 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:29:43.193576 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:29:43.193587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:43.193598 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:43.193609 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:29:43.193619 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:29:43.193633 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:43.193649 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:43.193660 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:29:43.193703 systemd-journald[1121]: Collecting audit messages is disabled. Feb 13 15:29:43.193731 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:29:43.193744 systemd-journald[1121]: Journal started Feb 13 15:29:43.193775 systemd-journald[1121]: Runtime Journal (/run/log/journal/a4b3e4b571a24e8e8820ebbf268d841e) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:29:42.904625 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:29:42.929705 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:29:42.930433 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:29:43.195704 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:29:43.197768 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:43.198475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:43.201030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:29:43.203012 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:29:43.205810 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:29:43.224909 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:29:43.233610 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:29:43.242370 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:29:43.245376 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:29:43.245424 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:29:43.250821 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:29:43.270739 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 15:29:43.287635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:29:43.288917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:43.291904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:29:43.295597 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:29:43.296525 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:43.299592 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:29:43.301507 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:43.304673 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:29:43.309999 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:29:43.321662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:29:43.325369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:29:43.327815 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:29:43.329697 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:29:43.334366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:29:43.356555 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:29:43.357317 systemd-journald[1121]: Time spent on flushing to /var/log/journal/a4b3e4b571a24e8e8820ebbf268d841e is 76.153ms for 1131 entries. Feb 13 15:29:43.357317 systemd-journald[1121]: System Journal (/var/log/journal/a4b3e4b571a24e8e8820ebbf268d841e) is 8.0M, max 584.8M, 576.8M free. Feb 13 15:29:43.448046 systemd-journald[1121]: Received client request to flush runtime journal. Feb 13 15:29:43.448138 kernel: loop0: detected capacity change from 0 to 113552 Feb 13 15:29:43.448154 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:29:43.377583 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:29:43.391541 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:29:43.392412 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:29:43.398730 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:29:43.422507 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 15:29:43.456793 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:29:43.459447 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:29:43.466734 kernel: loop1: detected capacity change from 0 to 8 Feb 13 15:29:43.473395 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 15:29:43.473989 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Feb 13 15:29:43.479801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 13 15:29:43.485257 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:29:43.487688 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:29:43.490724 kernel: loop2: detected capacity change from 0 to 194512 Feb 13 15:29:43.500553 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:29:43.543272 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:29:43.558584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:29:43.562352 kernel: loop3: detected capacity change from 0 to 116784 Feb 13 15:29:43.584236 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:29:43.584629 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:29:43.591375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:29:43.597356 kernel: loop4: detected capacity change from 0 to 113552 Feb 13 15:29:43.618496 kernel: loop5: detected capacity change from 0 to 8 Feb 13 15:29:43.621173 kernel: loop6: detected capacity change from 0 to 194512 Feb 13 15:29:43.643367 kernel: loop7: detected capacity change from 0 to 116784 Feb 13 15:29:43.653920 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:29:43.654422 (sd-merge)[1196]: Merged extensions into '/usr'. Feb 13 15:29:43.661058 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:29:43.661372 systemd[1]: Reloading... Feb 13 15:29:43.829485 zram_generator::config[1226]: No configuration found. Feb 13 15:29:43.968559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:29:43.971113 ldconfig[1162]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:29:44.017487 systemd[1]: Reloading finished in 355 ms. Feb 13 15:29:44.041389 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:29:44.042801 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:29:44.054723 systemd[1]: Starting ensure-sysext.service... Feb 13 15:29:44.058981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:29:44.078442 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:29:44.078478 systemd[1]: Reloading... Feb 13 15:29:44.113766 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:29:44.114482 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:29:44.115401 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:29:44.115839 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 15:29:44.116011 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 15:29:44.123271 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. 
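The (sd-merge) messages above are systemd-sysext at work: the kubernetes image linked into /etc/extensions by Ignition, together with the built-in Flatcar extensions, is overlaid onto /usr, after which systemd reloads so the merged content becomes visible. A small sketch that only enumerates candidate extension images in the common sysext search directories (directory list per the systemd-sysext documentation; nothing here is specific to this host, and the overlay itself is not performed):

```python
#!/usr/bin/env python3
"""Sketch: list candidate system extension images the way systemd-sysext
discovers them before a merge. See systemd-sysext(8) for the full search
path list; this does not mount or merge anything."""
from pathlib import Path

# Common sysext search directories (the man page lists additional ones
# such as /usr/lib/extensions).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    found = []
    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # Raw images end in .raw (e.g. kubernetes.raw); plain
            # directories are also accepted as extensions.
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(entry)
    return found

if __name__ == "__main__":
    for ext in discover_extensions():
        print(ext)
```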
Feb 13 15:29:44.124461 systemd-tmpfiles[1261]: Skipping /boot Feb 13 15:29:44.140232 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:29:44.140430 systemd-tmpfiles[1261]: Skipping /boot Feb 13 15:29:44.192353 zram_generator::config[1287]: No configuration found. Feb 13 15:29:44.313858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:29:44.362788 systemd[1]: Reloading finished in 283 ms. Feb 13 15:29:44.380759 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:29:44.388357 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:29:44.407110 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:29:44.414484 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:29:44.419042 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:29:44.427693 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:29:44.432034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:29:44.443382 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:29:44.447130 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:44.462207 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:44.466918 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:44.473659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:44.474953 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:44.477132 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:44.477337 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:44.483215 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:29:44.500012 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:29:44.505735 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:29:44.512680 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:44.522766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:44.523764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:44.525273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:44.525589 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:44.527298 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:44.529409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:44.532895 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:29:44.542595 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Feb 13 15:29:44.551840 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Feb 13 15:29:44.555679 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:44.557373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:44.559815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:44.562796 augenrules[1363]: No rules Feb 13 15:29:44.565698 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:29:44.576004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:29:44.582772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:44.583793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:44.584762 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:29:44.584972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:29:44.592350 systemd[1]: Finished ensure-sysext.service. Feb 13 15:29:44.593303 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:29:44.593580 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:29:44.594622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:44.594774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:44.598102 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:44.606647 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:29:44.614007 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:44.615634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:44.617789 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:44.632196 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:29:44.634118 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:29:44.638382 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:29:44.657834 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:29:44.658452 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:29:44.768029 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:29:44.797752 systemd-networkd[1390]: lo: Link UP Feb 13 15:29:44.797762 systemd-networkd[1390]: lo: Gained carrier Feb 13 15:29:44.800751 systemd-networkd[1390]: Enumeration completed Feb 13 15:29:44.801562 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:29:44.807607 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:29:44.808488 systemd-resolved[1335]: Positive Trust Anchors: Feb 13 15:29:44.808509 systemd-resolved[1335]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:29:44.808545 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:29:44.809088 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:29:44.810554 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:29:44.816001 systemd-resolved[1335]: Using system hostname 'ci-4186-1-1-6-ce8ef0549e'. Feb 13 15:29:44.834977 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:29:44.835955 systemd[1]: Reached target network.target - Network. Feb 13 15:29:44.837539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:29:44.908414 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:44.908579 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:44.910553 systemd-networkd[1390]: eth0: Link UP Feb 13 15:29:44.911023 systemd-networkd[1390]: eth0: Gained carrier Feb 13 15:29:44.911051 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:44.927777 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1399) Feb 13 15:29:44.931393 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:29:44.945011 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:44.945024 systemd-networkd[1390]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:29:44.947498 systemd-networkd[1390]: eth1: Link UP Feb 13 15:29:44.947620 systemd-networkd[1390]: eth1: Gained carrier Feb 13 15:29:44.947646 systemd-networkd[1390]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:29:44.957650 systemd-networkd[1390]: eth0: DHCPv4 address 142.132.179.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:29:44.959119 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection. Feb 13 15:29:44.986447 systemd-networkd[1390]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:29:44.990040 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:29:44.990180 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:29:45.006854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:29:45.012027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
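[editor's note] The systemd-networkd entries above record the DHCPv4 leases handed out for eth0 and eth1 (address, gateway, DHCP server). A small Python sketch that extracts those fields from journal lines in the exact format shown; the pattern is tied to this example output and would need adjusting for other message formats.

import re

# Matches e.g.:
# systemd-networkd[1390]: eth0: DHCPv4 address 142.132.179.183/32, gateway 172.31.1.1 acquired from 172.31.1.1
LEASE = re.compile(
    r"systemd-networkd\[\d+\]: (?P<iface>\S+): DHCPv4 address (?P<address>\S+), "
    r"gateway (?P<gateway>\S+) acquired from (?P<server>\S+)"
)

def dhcp_leases(journal_text: str) -> list[dict]:
    """Return one dict per DHCPv4 lease line found in the journal text."""
    return [m.groupdict() for m in LEASE.finditer(journal_text)]

if __name__ == "__main__":
    sample = (
        "systemd-networkd[1390]: eth0: DHCPv4 address 142.132.179.183/32, "
        "gateway 172.31.1.1 acquired from 172.31.1.1\n"
        "systemd-networkd[1390]: eth1: DHCPv4 address 10.0.0.3/32, "
        "gateway 10.0.0.1 acquired from 10.0.0.1\n"
    )
    for lease in dhcp_leases(sample):
        print(lease)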
Feb 13 15:29:45.017613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:29:45.018259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:29:45.018305 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:29:45.018737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:29:45.018890 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:29:45.035890 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:29:45.040431 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:29:45.045586 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:29:45.048798 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:29:45.050391 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:29:45.050500 kernel: [drm] features: -context_init Feb 13 15:29:45.049064 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:29:45.051852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:29:45.057071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:29:45.057292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:29:45.058820 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:29:45.066506 kernel: [drm] number of scanouts: 1 Feb 13 15:29:45.066584 kernel: [drm] number of cap sets: 0 Feb 13 15:29:45.485008 systemd-resolved[1335]: Clock change detected. Flushing caches. Feb 13 15:29:45.485172 systemd-timesyncd[1376]: Contacted time server 148.251.5.46:123 (0.flatcar.pool.ntp.org). Feb 13 15:29:45.485373 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-02-13 15:29:45.484857 UTC. Feb 13 15:29:45.493675 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:29:45.497788 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:29:45.505451 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:29:45.513999 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:29:45.516783 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:45.535984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:29:45.536406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:45.543674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:29:45.627755 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:29:45.658251 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:29:45.664766 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:29:45.682721 lvm[1449]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 13 15:29:45.712961 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:29:45.715151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:29:45.716854 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:29:45.718686 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:29:45.720014 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:29:45.721632 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:29:45.722783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:29:45.724082 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:29:45.725405 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:29:45.725467 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:29:45.726002 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:29:45.728760 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:29:45.731962 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:29:45.737924 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:29:45.740784 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:29:45.742971 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:29:45.743798 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:29:45.744445 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:29:45.745222 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:29:45.745258 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:29:45.752918 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:29:45.758540 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:29:45.763638 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:29:45.769379 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:29:45.774576 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:29:45.778155 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:29:45.779554 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:29:45.782618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:29:45.786583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:29:45.792792 jq[1457]: false Feb 13 15:29:45.801651 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:29:45.805583 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:29:45.812599 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:29:45.821560 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 15:29:45.825594 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:29:45.826301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:29:45.830597 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:29:45.834577 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:29:45.842393 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:29:45.842577 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:29:45.847880 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:29:45.849437 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:29:45.859337 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:29:45.869886 jq[1469]: true Feb 13 15:29:45.875553 dbus-daemon[1456]: [system] SELinux support is enabled Feb 13 15:29:45.877343 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:29:45.877831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:29:45.879656 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:29:45.897490 extend-filesystems[1458]: Found loop4 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found loop5 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found loop6 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found loop7 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda1 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda2 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda3 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found usr Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda4 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda6 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda7 Feb 13 15:29:45.897490 extend-filesystems[1458]: Found sda9 Feb 13 15:29:45.897490 extend-filesystems[1458]: Checking size of /dev/sda9 Feb 13 15:29:45.902880 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:29:45.938671 jq[1484]: true Feb 13 15:29:45.943704 extend-filesystems[1458]: Resized partition /dev/sda9 Feb 13 15:29:45.902932 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:29:45.944310 update_engine[1468]: I20250213 15:29:45.942111 1468 main.cc:92] Flatcar Update Engine starting Feb 13 15:29:45.907233 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:29:45.907258 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 15:29:45.948529 tar[1475]: linux-arm64/helm Feb 13 15:29:45.948789 extend-filesystems[1498]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:29:45.945745 (ntainerd)[1492]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:29:45.952755 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:29:45.958792 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:29:45.959243 update_engine[1468]: I20250213 15:29:45.958945 1468 update_check_scheduler.cc:74] Next update check in 9m41s Feb 13 15:29:45.960569 coreos-metadata[1455]: Feb 13 15:29:45.960 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:29:45.964626 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:29:45.968592 coreos-metadata[1455]: Feb 13 15:29:45.968 INFO Fetch successful Feb 13 15:29:45.968592 coreos-metadata[1455]: Feb 13 15:29:45.968 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:29:45.970869 coreos-metadata[1455]: Feb 13 15:29:45.970 INFO Fetch successful Feb 13 15:29:46.023075 systemd-logind[1466]: New seat seat0. Feb 13 15:29:46.026780 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:29:46.026797 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:29:46.034478 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:29:46.144025 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:29:46.178086 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1396) Feb 13 15:29:46.178160 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:29:46.154096 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:29:46.166407 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:29:46.167611 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:29:46.172703 systemd[1]: Starting sshkeys.service... Feb 13 15:29:46.185095 extend-filesystems[1498]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:29:46.185095 extend-filesystems[1498]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:29:46.185095 extend-filesystems[1498]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:29:46.190488 extend-filesystems[1458]: Resized filesystem in /dev/sda9 Feb 13 15:29:46.190488 extend-filesystems[1458]: Found sr0 Feb 13 15:29:46.185734 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:29:46.186055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:29:46.261023 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:29:46.274764 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
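[editor's note] The coreos-metadata entries above fetch instance metadata from the link-local Hetzner endpoint. A hedged sketch of the same two requests using only the Python standard library; it assumes it is run inside a Hetzner Cloud guest, since 169.254.169.254 is only reachable from within the instance.

import urllib.request

# Endpoints taken from the coreos-metadata log lines above.
ENDPOINTS = [
    "http://169.254.169.254/hetzner/v1/metadata",
    "http://169.254.169.254/hetzner/v1/metadata/private-networks",
]

def fetch(url: str, timeout: float = 5.0) -> str:
    """Fetch one metadata document; only works from inside a Hetzner Cloud VM."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    for url in ENDPOINTS:
        print("---", url)
        print(fetch(url))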
Feb 13 15:29:46.327528 coreos-metadata[1536]: Feb 13 15:29:46.327 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:29:46.333578 coreos-metadata[1536]: Feb 13 15:29:46.331 INFO Fetch successful Feb 13 15:29:46.336571 unknown[1536]: wrote ssh authorized keys file for user: core Feb 13 15:29:46.379084 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:29:46.381095 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:29:46.387939 systemd[1]: Finished sshkeys.service. Feb 13 15:29:46.393367 containerd[1492]: time="2025-02-13T15:29:46.391616547Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:29:46.456187 containerd[1492]: time="2025-02-13T15:29:46.456120627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.464605 containerd[1492]: time="2025-02-13T15:29:46.464542827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:46.464605 containerd[1492]: time="2025-02-13T15:29:46.464593707Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:29:46.464605 containerd[1492]: time="2025-02-13T15:29:46.464612667Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:29:46.464805 containerd[1492]: time="2025-02-13T15:29:46.464780987Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:29:46.464837 containerd[1492]: time="2025-02-13T15:29:46.464806667Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.464893 containerd[1492]: time="2025-02-13T15:29:46.464874067Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:46.464916 containerd[1492]: time="2025-02-13T15:29:46.464893747Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.465112 containerd[1492]: time="2025-02-13T15:29:46.465083947Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:46.465112 containerd[1492]: time="2025-02-13T15:29:46.465108867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.465173 containerd[1492]: time="2025-02-13T15:29:46.465123107Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:46.465173 containerd[1492]: time="2025-02-13T15:29:46.465133347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 13 15:29:46.465268 containerd[1492]: time="2025-02-13T15:29:46.465246187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.467641 containerd[1492]: time="2025-02-13T15:29:46.467598227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:29:46.467773 containerd[1492]: time="2025-02-13T15:29:46.467752467Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:29:46.467773 containerd[1492]: time="2025-02-13T15:29:46.467771547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:29:46.467899 containerd[1492]: time="2025-02-13T15:29:46.467876667Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:29:46.467982 containerd[1492]: time="2025-02-13T15:29:46.467936987Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:29:46.472530 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:29:46.491249 containerd[1492]: time="2025-02-13T15:29:46.490989587Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:29:46.491249 containerd[1492]: time="2025-02-13T15:29:46.491068947Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:29:46.491249 containerd[1492]: time="2025-02-13T15:29:46.491085547Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:29:46.491249 containerd[1492]: time="2025-02-13T15:29:46.491104107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:29:46.491249 containerd[1492]: time="2025-02-13T15:29:46.491123827Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:29:46.491774 containerd[1492]: time="2025-02-13T15:29:46.491309467Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493646267Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493883347Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493903427Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493920667Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493935467Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493964307Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493979907Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.493995107Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494010787Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494023907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494036947Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494049427Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494070947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496474 containerd[1492]: time="2025-02-13T15:29:46.494086067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494098547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494111867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494124187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494137947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494150907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494164627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494180067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494195867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494208707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494221187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494234387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494255187Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494283347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494296907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.496806 containerd[1492]: time="2025-02-13T15:29:46.494308747Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494611867Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494636027Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494646827Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494658507Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494671067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494684427Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494698027Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:29:46.497086 containerd[1492]: time="2025-02-13T15:29:46.494708307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:29:46.497227 containerd[1492]: time="2025-02-13T15:29:46.495109947Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:29:46.497227 containerd[1492]: time="2025-02-13T15:29:46.495165747Z" level=info msg="Connect containerd service" Feb 13 15:29:46.497227 containerd[1492]: time="2025-02-13T15:29:46.495197867Z" level=info msg="using legacy CRI server" Feb 13 15:29:46.497227 containerd[1492]: time="2025-02-13T15:29:46.495205907Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:29:46.497580 containerd[1492]: time="2025-02-13T15:29:46.497537067Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:29:46.500597 containerd[1492]: time="2025-02-13T15:29:46.500543747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:29:46.501168 
containerd[1492]: time="2025-02-13T15:29:46.500812947Z" level=info msg="Start subscribing containerd event" Feb 13 15:29:46.501237 containerd[1492]: time="2025-02-13T15:29:46.501185107Z" level=info msg="Start recovering state" Feb 13 15:29:46.501501 containerd[1492]: time="2025-02-13T15:29:46.501267427Z" level=info msg="Start event monitor" Feb 13 15:29:46.501501 containerd[1492]: time="2025-02-13T15:29:46.501286587Z" level=info msg="Start snapshots syncer" Feb 13 15:29:46.501501 containerd[1492]: time="2025-02-13T15:29:46.501297467Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:29:46.501501 containerd[1492]: time="2025-02-13T15:29:46.501306907Z" level=info msg="Start streaming server" Feb 13 15:29:46.502886 containerd[1492]: time="2025-02-13T15:29:46.501808187Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:29:46.502886 containerd[1492]: time="2025-02-13T15:29:46.501859227Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:29:46.502085 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:29:46.506475 containerd[1492]: time="2025-02-13T15:29:46.506418227Z" level=info msg="containerd successfully booted in 0.119626s" Feb 13 15:29:46.700735 tar[1475]: linux-arm64/LICENSE Feb 13 15:29:46.701114 tar[1475]: linux-arm64/README.md Feb 13 15:29:46.717548 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:29:46.808917 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:29:46.833919 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:29:46.843870 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:29:46.854633 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:29:46.854880 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:29:46.863076 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:29:46.876004 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:29:46.884892 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:29:46.888400 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:29:46.889828 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:29:46.915595 systemd-networkd[1390]: eth0: Gained IPv6LL Feb 13 15:29:46.920026 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:29:46.921326 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:29:46.928682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:29:46.932584 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:29:46.972623 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:29:47.300383 systemd-networkd[1390]: eth1: Gained IPv6LL Feb 13 15:29:47.621231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:29:47.624041 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:29:47.627301 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:29:47.630049 systemd[1]: Startup finished in 801ms (kernel) + 5.731s (initrd) + 4.836s (userspace) = 11.369s. 
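[editor's note] Once containerd reports serving on /run/containerd/containerd.sock (see the "containerd successfully booted" entry above), the socket can be probed directly. A minimal sketch, assuming it runs as root on the same host; it only checks that the Unix socket accepts a stream connection and does not speak the containerd API.

import socket

CONTAINERD_SOCKET = "/run/containerd/containerd.sock"  # path from the log above

def socket_accepts_connections(path: str) -> bool:
    """Return True if the Unix socket at `path` accepts a stream connection."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.settimeout(2.0)
    try:
        sock.connect(path)
        return True
    except OSError:
        return False
    finally:
        sock.close()

if __name__ == "__main__":
    print(CONTAINERD_SOCKET, "reachable:", socket_accepts_connections(CONTAINERD_SOCKET))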
Feb 13 15:29:47.641628 agetty[1566]: failed to open credentials directory Feb 13 15:29:47.642061 agetty[1567]: failed to open credentials directory Feb 13 15:29:48.236443 kubelet[1584]: E0213 15:29:48.236309 1584 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:29:48.241910 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:29:48.242260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:29:58.492985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:29:58.503795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:29:58.619302 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:29:58.631829 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:29:58.689279 kubelet[1604]: E0213 15:29:58.689192 1604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:29:58.694280 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:29:58.694579 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:08.944877 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:30:08.956243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:09.090670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:09.097716 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:09.162009 kubelet[1621]: E0213 15:30:09.161951 1621 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:09.166053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:09.166231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:19.227946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:30:19.234758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:19.383956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:30:19.389648 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:19.447828 kubelet[1637]: E0213 15:30:19.447753 1637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:19.450437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:19.450582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:27.031263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:30:27.036858 systemd[1]: Started sshd@0-142.132.179.183:22-36.26.72.149:39556.service - OpenSSH per-connection server daemon (36.26.72.149:39556). Feb 13 15:30:29.478446 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:30:29.489627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:29.610249 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:29.620771 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:29.678020 kubelet[1655]: E0213 15:30:29.677907 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:29.682022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:29.682169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:29.968851 sshd[1646]: Invalid user gh from 36.26.72.149 port 39556 Feb 13 15:30:30.214366 sshd[1646]: Received disconnect from 36.26.72.149 port 39556:11: Bye Bye [preauth] Feb 13 15:30:30.214366 sshd[1646]: Disconnected from invalid user gh 36.26.72.149 port 39556 [preauth] Feb 13 15:30:30.216438 systemd[1]: sshd@0-142.132.179.183:22-36.26.72.149:39556.service: Deactivated successfully. Feb 13 15:30:31.297534 update_engine[1468]: I20250213 15:30:31.296908 1468 update_attempter.cc:509] Updating boot flags... Feb 13 15:30:31.340388 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1675) Feb 13 15:30:39.727792 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:30:39.733678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:39.874164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:30:39.886108 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:39.940956 kubelet[1689]: E0213 15:30:39.940844 1689 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:39.944233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:39.944407 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:30:49.977908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:30:49.988721 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:30:50.107686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:30:50.110435 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:30:50.167736 kubelet[1705]: E0213 15:30:50.167685 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:30:50.170220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:30:50.170513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:00.228313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:31:00.248133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:00.410649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:00.413499 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:00.469094 kubelet[1721]: E0213 15:31:00.469011 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:00.472846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:00.473158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:10.478188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:31:10.494749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:10.609260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:31:10.615514 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:10.673488 kubelet[1737]: E0213 15:31:10.673408 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:10.679239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:10.679722 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:20.728191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 15:31:20.742871 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:20.895150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:20.906843 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:20.966976 kubelet[1753]: E0213 15:31:20.966901 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:20.969617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:20.969758 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:30.978072 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 15:31:30.990037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:31.107505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:31.111923 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:31.162075 kubelet[1768]: E0213 15:31:31.161995 1768 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:31.164911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:31.165098 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:40.957227 systemd[1]: Started sshd@1-142.132.179.183:22-139.178.89.65:53008.service - OpenSSH per-connection server daemon (139.178.89.65:53008). Feb 13 15:31:41.227949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 15:31:41.233983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:41.378812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
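[editor's note] The kubelet entries repeat the same failure on every restart: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps rescheduling it while the restart counter climbs. A small sketch that tallies those restart attempts and the missing-file error from journal text in the format shown above.

import re

RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)\.")
MISSING_CONFIG = re.compile(r"open (/var/lib/kubelet/config\.yaml): no such file or directory")

def summarize_kubelet_restarts(journal_text: str) -> dict:
    """Summarize how often kubelet was restarted and why, from journal text."""
    counters = [int(n) for n in RESTART.findall(journal_text)]
    missing = MISSING_CONFIG.findall(journal_text)
    return {
        "restart_attempts": max(counters) if counters else 0,
        "missing_config_errors": len(missing),
        "missing_path": missing[0] if missing else None,
    }

if __name__ == "__main__":
    sample = (
        'kubelet.service: Scheduled restart job, restart counter is at 11.\n'
        'run.go:74] "command failed" err="... open /var/lib/kubelet/config.yaml: '
        'no such file or directory"\n'
    )
    print(summarize_kubelet_restarts(sample))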
Feb 13 15:31:41.379148 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:41.435827 kubelet[1788]: E0213 15:31:41.435715 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:41.438376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:41.438518 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:41.949082 sshd[1778]: Accepted publickey for core from 139.178.89.65 port 53008 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:41.953282 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:41.971990 systemd-logind[1466]: New session 1 of user core. Feb 13 15:31:41.973921 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:31:41.981800 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:31:41.999414 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:31:42.008964 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:31:42.014839 (systemd)[1798]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:31:42.128706 systemd[1798]: Queued start job for default target default.target. Feb 13 15:31:42.137432 systemd[1798]: Created slice app.slice - User Application Slice. Feb 13 15:31:42.137486 systemd[1798]: Reached target paths.target - Paths. Feb 13 15:31:42.137512 systemd[1798]: Reached target timers.target - Timers. Feb 13 15:31:42.140196 systemd[1798]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:31:42.158627 systemd[1798]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:31:42.158792 systemd[1798]: Reached target sockets.target - Sockets. Feb 13 15:31:42.158809 systemd[1798]: Reached target basic.target - Basic System. Feb 13 15:31:42.158861 systemd[1798]: Reached target default.target - Main User Target. Feb 13 15:31:42.158889 systemd[1798]: Startup finished in 135ms. Feb 13 15:31:42.159135 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:31:42.170666 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:31:42.873652 systemd[1]: Started sshd@2-142.132.179.183:22-139.178.89.65:53014.service - OpenSSH per-connection server daemon (139.178.89.65:53014). Feb 13 15:31:43.857210 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 53014 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:43.859767 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:43.866286 systemd-logind[1466]: New session 2 of user core. Feb 13 15:31:43.876732 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:31:44.534130 sshd[1811]: Connection closed by 139.178.89.65 port 53014 Feb 13 15:31:44.535148 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:44.540830 systemd[1]: sshd@2-142.132.179.183:22-139.178.89.65:53014.service: Deactivated successfully. 
Feb 13 15:31:44.543509 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:31:44.544195 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:31:44.545184 systemd-logind[1466]: Removed session 2. Feb 13 15:31:44.705507 systemd[1]: Started sshd@3-142.132.179.183:22-139.178.89.65:39800.service - OpenSSH per-connection server daemon (139.178.89.65:39800). Feb 13 15:31:45.686390 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 39800 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:45.688577 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:45.696054 systemd-logind[1466]: New session 3 of user core. Feb 13 15:31:45.698636 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:31:46.356469 sshd[1818]: Connection closed by 139.178.89.65 port 39800 Feb 13 15:31:46.357239 sshd-session[1816]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:46.364155 systemd[1]: sshd@3-142.132.179.183:22-139.178.89.65:39800.service: Deactivated successfully. Feb 13 15:31:46.366171 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:31:46.367017 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:31:46.368441 systemd-logind[1466]: Removed session 3. Feb 13 15:31:46.529610 systemd[1]: Started sshd@4-142.132.179.183:22-139.178.89.65:39806.service - OpenSSH per-connection server daemon (139.178.89.65:39806). Feb 13 15:31:47.526992 sshd[1823]: Accepted publickey for core from 139.178.89.65 port 39806 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:47.529905 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:47.536044 systemd-logind[1466]: New session 4 of user core. Feb 13 15:31:47.546734 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:31:48.211038 sshd[1825]: Connection closed by 139.178.89.65 port 39806 Feb 13 15:31:48.211704 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:48.216088 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:31:48.217462 systemd[1]: sshd@4-142.132.179.183:22-139.178.89.65:39806.service: Deactivated successfully. Feb 13 15:31:48.219761 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:31:48.221159 systemd-logind[1466]: Removed session 4. Feb 13 15:31:48.387864 systemd[1]: Started sshd@5-142.132.179.183:22-139.178.89.65:39814.service - OpenSSH per-connection server daemon (139.178.89.65:39814). Feb 13 15:31:49.365941 sshd[1830]: Accepted publickey for core from 139.178.89.65 port 39814 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:49.368206 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:49.373874 systemd-logind[1466]: New session 5 of user core. Feb 13 15:31:49.380771 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 15:31:49.903164 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:31:49.903600 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:49.919157 sudo[1833]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:50.078438 sshd[1832]: Connection closed by 139.178.89.65 port 39814 Feb 13 15:31:50.079544 sshd-session[1830]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:50.085819 systemd[1]: sshd@5-142.132.179.183:22-139.178.89.65:39814.service: Deactivated successfully. Feb 13 15:31:50.088933 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:31:50.090921 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:31:50.093052 systemd-logind[1466]: Removed session 5. Feb 13 15:31:50.253693 systemd[1]: Started sshd@6-142.132.179.183:22-139.178.89.65:39820.service - OpenSSH per-connection server daemon (139.178.89.65:39820). Feb 13 15:31:51.241544 sshd[1838]: Accepted publickey for core from 139.178.89.65 port 39820 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:51.244809 sshd-session[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:51.248994 systemd-logind[1466]: New session 6 of user core. Feb 13 15:31:51.259934 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:31:51.477895 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 15:31:51.485626 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:31:51.621662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:31:51.623205 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:31:51.699854 kubelet[1849]: E0213 15:31:51.699669 1849 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:31:51.703690 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:31:51.703945 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:31:51.765397 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:31:51.766108 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:51.770861 sudo[1858]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:51.777107 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:31:51.777411 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:51.796079 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:31:51.831903 augenrules[1880]: No rules Feb 13 15:31:51.833856 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:31:51.834172 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
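By this point systemd has re-queued kubelet twelve times ("restart counter is at 12"), and each attempt dies on the same missing config file. A small sketch for pulling those counters out of journal text, matching the exact phrase systemd logs (the program name and structure are illustrative):

// restart_counter.go — extracts the kubelet restart counter from journal text
// like the lines above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`kubelet\.service: Scheduled restart job, restart counter is at (\d+)`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Println("kubelet restart counter:", m[1])
		}
	}
}

Feeding it the output of journalctl for the kubelet unit on stdin would print one line per scheduled restart, making the crash loop easy to quantify.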
Feb 13 15:31:51.836652 sudo[1857]: pam_unix(sudo:session): session closed for user root Feb 13 15:31:51.999382 sshd[1840]: Connection closed by 139.178.89.65 port 39820 Feb 13 15:31:51.999921 sshd-session[1838]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:52.006709 systemd[1]: sshd@6-142.132.179.183:22-139.178.89.65:39820.service: Deactivated successfully. Feb 13 15:31:52.010208 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:31:52.011793 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:31:52.013070 systemd-logind[1466]: Removed session 6. Feb 13 15:31:52.180891 systemd[1]: Started sshd@7-142.132.179.183:22-139.178.89.65:39824.service - OpenSSH per-connection server daemon (139.178.89.65:39824). Feb 13 15:31:53.178174 sshd[1888]: Accepted publickey for core from 139.178.89.65 port 39824 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:31:53.180570 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:53.187412 systemd-logind[1466]: New session 7 of user core. Feb 13 15:31:53.193677 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:31:53.707375 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:31:53.707699 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:31:54.026402 (dockerd)[1910]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:31:54.027135 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:31:54.259465 dockerd[1910]: time="2025-02-13T15:31:54.258717511Z" level=info msg="Starting up" Feb 13 15:31:54.342524 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2149334410-merged.mount: Deactivated successfully. Feb 13 15:31:54.371966 dockerd[1910]: time="2025-02-13T15:31:54.371884388Z" level=info msg="Loading containers: start." Feb 13 15:31:54.536405 kernel: Initializing XFRM netlink socket Feb 13 15:31:54.656122 systemd-networkd[1390]: docker0: Link UP Feb 13 15:31:54.688811 dockerd[1910]: time="2025-02-13T15:31:54.688625360Z" level=info msg="Loading containers: done." Feb 13 15:31:54.714203 dockerd[1910]: time="2025-02-13T15:31:54.714112113Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:31:54.714436 dockerd[1910]: time="2025-02-13T15:31:54.714288396Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:31:54.714698 dockerd[1910]: time="2025-02-13T15:31:54.714670124Z" level=info msg="Daemon has completed initialization" Feb 13 15:31:54.766638 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:31:54.768284 dockerd[1910]: time="2025-02-13T15:31:54.766668770Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:31:55.997528 containerd[1492]: time="2025-02-13T15:31:55.997425785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:31:56.630284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1715781347.mount: Deactivated successfully. 
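Once docker.service reports "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. A sketch of the smallest possible client call, GET /version, against the socket path taken from the log (nothing here is Flatcar-specific):

// docker_ping.go — talks to the Engine API the daemon above announces on
// /run/docker.sock; GET /version is a stable, well-known endpoint.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				// The socket path comes from the "API listen on /run/docker.sock" line.
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://unix/version") // host is ignored for unix sockets
	if err != nil {
		fmt.Println("docker API not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}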
Feb 13 15:31:57.596748 containerd[1492]: time="2025-02-13T15:31:57.596659507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:57.598771 containerd[1492]: time="2025-02-13T15:31:57.598630304Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205953" Feb 13 15:31:57.601033 containerd[1492]: time="2025-02-13T15:31:57.599837767Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:57.604329 containerd[1492]: time="2025-02-13T15:31:57.604250690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:57.605555 containerd[1492]: time="2025-02-13T15:31:57.605513153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.608028727s" Feb 13 15:31:57.607323 containerd[1492]: time="2025-02-13T15:31:57.605561554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:31:57.631836 containerd[1492]: time="2025-02-13T15:31:57.631684524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:31:58.932836 containerd[1492]: time="2025-02-13T15:31:58.932775017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:58.934828 containerd[1492]: time="2025-02-13T15:31:58.934335926Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383111" Feb 13 15:31:58.937203 containerd[1492]: time="2025-02-13T15:31:58.935372705Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:58.943609 containerd[1492]: time="2025-02-13T15:31:58.943535535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:58.944508 containerd[1492]: time="2025-02-13T15:31:58.944464552Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.311992492s" Feb 13 15:31:58.944649 containerd[1492]: time="2025-02-13T15:31:58.944630715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 
15:31:58.970455 containerd[1492]: time="2025-02-13T15:31:58.970397787Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:31:59.988203 containerd[1492]: time="2025-02-13T15:31:59.986653615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:59.988203 containerd[1492]: time="2025-02-13T15:31:59.988139562Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15767000" Feb 13 15:31:59.989834 containerd[1492]: time="2025-02-13T15:31:59.989457345Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:59.992622 containerd[1492]: time="2025-02-13T15:31:59.992580721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:31:59.993799 containerd[1492]: time="2025-02-13T15:31:59.993759702Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.023118071s" Feb 13 15:31:59.994019 containerd[1492]: time="2025-02-13T15:31:59.993907425Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:32:00.021342 containerd[1492]: time="2025-02-13T15:32:00.021298427Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:32:01.055112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629938381.mount: Deactivated successfully. 
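The mount unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount629938381.mount are systemd path escapes of the underlying directory: '/' separators become '-', and a literal '-' inside a path component is encoded as \x2d. A simplified sketch of that mapping (the real systemd-escape also handles dots and other special bytes, which are ignored here):

// unit_escape.go — simplified derivation of the mount unit names seen above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, part := range strings.Split(p, "/") {
		if b.Len() > 0 {
			b.WriteByte('-')
		}
		b.WriteString(strings.ReplaceAll(part, "-", `\x2d`))
	}
	return b.String() + ".mount"
}

func main() {
	// Reproduces the unit name logged for the containerd tmp mount.
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount629938381"))
	// Output: var-lib-containerd-tmpmounts-containerd\x2dmount629938381.mount
}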
Feb 13 15:32:01.383508 containerd[1492]: time="2025-02-13T15:32:01.383369029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:01.385774 containerd[1492]: time="2025-02-13T15:32:01.385700109Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273401" Feb 13 15:32:01.387176 containerd[1492]: time="2025-02-13T15:32:01.387096412Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:01.391457 containerd[1492]: time="2025-02-13T15:32:01.390324508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:01.391457 containerd[1492]: time="2025-02-13T15:32:01.391258284Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.369673451s" Feb 13 15:32:01.391457 containerd[1492]: time="2025-02-13T15:32:01.391300605Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:32:01.421820 containerd[1492]: time="2025-02-13T15:32:01.421779287Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:32:01.727900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 15:32:01.736623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:01.859148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:01.876042 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:32:01.942297 kubelet[2199]: E0213 15:32:01.942223 2199 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:32:01.946154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:32:01.946462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:32:02.013200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459142368.mount: Deactivated successfully. 
Feb 13 15:32:02.641014 containerd[1492]: time="2025-02-13T15:32:02.640941341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:02.643078 containerd[1492]: time="2025-02-13T15:32:02.642976015Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:32:02.645153 containerd[1492]: time="2025-02-13T15:32:02.645075090Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:02.648639 containerd[1492]: time="2025-02-13T15:32:02.648575269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:02.649894 containerd[1492]: time="2025-02-13T15:32:02.649836730Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.227801439s" Feb 13 15:32:02.649894 containerd[1492]: time="2025-02-13T15:32:02.649880171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:32:02.675750 containerd[1492]: time="2025-02-13T15:32:02.675708044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:32:03.230648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644296963.mount: Deactivated successfully. 
Feb 13 15:32:03.239397 containerd[1492]: time="2025-02-13T15:32:03.238418951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:03.240674 containerd[1492]: time="2025-02-13T15:32:03.240618147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:32:03.241987 containerd[1492]: time="2025-02-13T15:32:03.241946888Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:03.245610 containerd[1492]: time="2025-02-13T15:32:03.245523387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:03.246209 containerd[1492]: time="2025-02-13T15:32:03.246177878Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 570.429394ms" Feb 13 15:32:03.246506 containerd[1492]: time="2025-02-13T15:32:03.246302840Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:32:03.269382 containerd[1492]: time="2025-02-13T15:32:03.268949251Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:32:03.846407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458512802.mount: Deactivated successfully. Feb 13 15:32:05.207511 containerd[1492]: time="2025-02-13T15:32:05.206542256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:05.209212 containerd[1492]: time="2025-02-13T15:32:05.208647569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Feb 13 15:32:05.210559 containerd[1492]: time="2025-02-13T15:32:05.210509639Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:05.214578 containerd[1492]: time="2025-02-13T15:32:05.214528782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:05.216002 containerd[1492]: time="2025-02-13T15:32:05.215949524Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.946956832s" Feb 13 15:32:05.216002 containerd[1492]: time="2025-02-13T15:32:05.215997165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:32:10.506320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
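The containerd "Pulled image ... in ..." lines include both the image size and the wall-clock pull time, which gives a rough pull throughput, for example roughly 33 MB/s for the 65 MB etcd image. A back-of-the-envelope sketch using the sizes and durations copied from the log:

// pull_rate.go — throughput for the image pulls logged above; bytes and
// seconds are taken from the containerd "Pulled image ... size ... in ..."
// messages.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"kube-apiserver:v1.29.14", 32202661, 1.608028727},
		{"kube-controller-manager:v1.29.14", 30786820, 1.311992492},
		{"kube-scheduler:v1.29.14", 17170727, 1.023118071},
		{"kube-proxy:v1.29.14", 25272394, 1.369673451},
		{"etcd:3.5.10-0", 65198393, 1.946956832},
	}
	for _, p := range pulls {
		fmt.Printf("%-36s %6.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
}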
Feb 13 15:32:10.523469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:10.536070 systemd[1]: Reloading requested from client PID 2377 ('systemctl') (unit session-7.scope)... Feb 13 15:32:10.536108 systemd[1]: Reloading... Feb 13 15:32:10.674390 zram_generator::config[2432]: No configuration found. Feb 13 15:32:10.770535 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:32:10.839972 systemd[1]: Reloading finished in 303 ms. Feb 13 15:32:10.897855 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:32:10.898339 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:32:10.899231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:10.906038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:11.038202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:11.052794 (kubelet)[2464]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:32:11.114255 kubelet[2464]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:11.114255 kubelet[2464]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:32:11.114255 kubelet[2464]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:11.114679 kubelet[2464]: I0213 15:32:11.114299 2464 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:32:12.020402 kubelet[2464]: I0213 15:32:12.019026 2464 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:32:12.020402 kubelet[2464]: I0213 15:32:12.019077 2464 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:32:12.020402 kubelet[2464]: I0213 15:32:12.019504 2464 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:32:12.045606 kubelet[2464]: I0213 15:32:12.045305 2464 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:32:12.045780 kubelet[2464]: E0213 15:32:12.045637 2464 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://142.132.179.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.056167 kubelet[2464]: I0213 15:32:12.055699 2464 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:32:12.057417 kubelet[2464]: I0213 15:32:12.057383 2464 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:32:12.057964 kubelet[2464]: I0213 15:32:12.057742 2464 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:32:12.057964 kubelet[2464]: I0213 15:32:12.057771 2464 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:32:12.057964 kubelet[2464]: I0213 15:32:12.057781 2464 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:32:12.059978 kubelet[2464]: I0213 15:32:12.059875 2464 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:12.063044 kubelet[2464]: I0213 15:32:12.062778 2464 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:32:12.063044 kubelet[2464]: I0213 15:32:12.063025 2464 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:32:12.063044 kubelet[2464]: I0213 15:32:12.063061 2464 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:32:12.063287 kubelet[2464]: I0213 15:32:12.063076 2464 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:32:12.065979 kubelet[2464]: W0213 15:32:12.065766 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://142.132.179.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-6-ce8ef0549e&limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.065979 kubelet[2464]: E0213 15:32:12.065837 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://142.132.179.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-6-ce8ef0549e&limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.066460 kubelet[2464]: W0213 15:32:12.066395 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://142.132.179.183:6443/api/v1/services?limit=500&resourceVersion=0": 
dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.066460 kubelet[2464]: E0213 15:32:12.066441 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://142.132.179.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.066570 kubelet[2464]: I0213 15:32:12.066521 2464 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:32:12.067089 kubelet[2464]: I0213 15:32:12.067027 2464 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:32:12.069201 kubelet[2464]: W0213 15:32:12.069106 2464 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:32:12.071382 kubelet[2464]: I0213 15:32:12.070885 2464 server.go:1256] "Started kubelet" Feb 13 15:32:12.077677 kubelet[2464]: I0213 15:32:12.077551 2464 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:32:12.080262 kubelet[2464]: E0213 15:32:12.080216 2464 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://142.132.179.183:6443/api/v1/namespaces/default/events\": dial tcp 142.132.179.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-1-6-ce8ef0549e.1823ce51dbd2ffd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-1-6-ce8ef0549e,UID:ci-4186-1-1-6-ce8ef0549e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-6-ce8ef0549e,},FirstTimestamp:2025-02-13 15:32:12.070854613 +0000 UTC m=+1.010712051,LastTimestamp:2025-02-13 15:32:12.070854613 +0000 UTC m=+1.010712051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-6-ce8ef0549e,}" Feb 13 15:32:12.086431 kubelet[2464]: I0213 15:32:12.085981 2464 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:32:12.087716 kubelet[2464]: I0213 15:32:12.087686 2464 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:32:12.088099 kubelet[2464]: I0213 15:32:12.088080 2464 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:32:12.090397 kubelet[2464]: I0213 15:32:12.090365 2464 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:32:12.090774 kubelet[2464]: I0213 15:32:12.090758 2464 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:32:12.091449 kubelet[2464]: I0213 15:32:12.091133 2464 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:32:12.091772 kubelet[2464]: I0213 15:32:12.091743 2464 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:32:12.091956 kubelet[2464]: E0213 15:32:12.091939 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.179.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-6-ce8ef0549e?timeout=10s\": dial tcp 142.132.179.183:6443: connect: connection refused" interval="200ms" Feb 13 15:32:12.093287 kubelet[2464]: W0213 15:32:12.093221 2464 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://142.132.179.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.093435 kubelet[2464]: E0213 15:32:12.093420 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://142.132.179.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.093651 kubelet[2464]: I0213 15:32:12.093632 2464 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:32:12.093841 kubelet[2464]: I0213 15:32:12.093815 2464 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:32:12.097195 kubelet[2464]: I0213 15:32:12.097106 2464 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:32:12.108000 kubelet[2464]: E0213 15:32:12.107968 2464 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:32:12.112596 kubelet[2464]: I0213 15:32:12.112565 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:32:12.114230 kubelet[2464]: I0213 15:32:12.114207 2464 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:32:12.114391 kubelet[2464]: I0213 15:32:12.114377 2464 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:32:12.114692 kubelet[2464]: I0213 15:32:12.114675 2464 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:32:12.114805 kubelet[2464]: E0213 15:32:12.114793 2464 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:32:12.124268 kubelet[2464]: W0213 15:32:12.124204 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://142.132.179.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.124615 kubelet[2464]: E0213 15:32:12.124249 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://142.132.179.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.124949 kubelet[2464]: I0213 15:32:12.124930 2464 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:32:12.124949 kubelet[2464]: I0213 15:32:12.124949 2464 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:32:12.125026 kubelet[2464]: I0213 15:32:12.124965 2464 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:12.127917 kubelet[2464]: I0213 15:32:12.127872 2464 policy_none.go:49] "None policy: Start" Feb 13 15:32:12.128721 kubelet[2464]: I0213 15:32:12.128699 2464 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:32:12.129222 kubelet[2464]: I0213 15:32:12.128856 2464 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:32:12.137045 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:32:12.155316 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:32:12.160741 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:32:12.173038 kubelet[2464]: I0213 15:32:12.172968 2464 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:32:12.173371 kubelet[2464]: I0213 15:32:12.173335 2464 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:32:12.178935 kubelet[2464]: E0213 15:32:12.178379 2464 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-1-6-ce8ef0549e\" not found" Feb 13 15:32:12.192259 kubelet[2464]: I0213 15:32:12.192180 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.193080 kubelet[2464]: E0213 15:32:12.193055 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.179.183:6443/api/v1/nodes\": dial tcp 142.132.179.183:6443: connect: connection refused" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.215644 kubelet[2464]: I0213 15:32:12.215394 2464 topology_manager.go:215] "Topology Admit Handler" podUID="58517a95d37ac0d4c544954d18cec0f6" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.217908 kubelet[2464]: I0213 15:32:12.217602 2464 topology_manager.go:215] "Topology Admit Handler" podUID="936a04b7b2da81ede2129a886084f054" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.219679 kubelet[2464]: I0213 15:32:12.219652 2464 topology_manager.go:215] "Topology Admit Handler" podUID="113024124467c3d4afa32dd72de5c3a5" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.229320 systemd[1]: Created slice kubepods-burstable-pod58517a95d37ac0d4c544954d18cec0f6.slice - libcontainer container kubepods-burstable-pod58517a95d37ac0d4c544954d18cec0f6.slice. Feb 13 15:32:12.263230 systemd[1]: Created slice kubepods-burstable-pod936a04b7b2da81ede2129a886084f054.slice - libcontainer container kubepods-burstable-pod936a04b7b2da81ede2129a886084f054.slice. Feb 13 15:32:12.274715 systemd[1]: Created slice kubepods-burstable-pod113024124467c3d4afa32dd72de5c3a5.slice - libcontainer container kubepods-burstable-pod113024124467c3d4afa32dd72de5c3a5.slice. 
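The repeated "dial tcp 142.132.179.183:6443: connect: connection refused" errors are the kubelet bootstrapping against its own, not-yet-running kube-apiserver static pod: every list/watch and node-registration call fails until something listens on port 6443. A sketch of the TCP-level probe behind those messages, with the address taken from the log:

// apiserver_probe.go — the check behind "connect: connection refused": until
// the kube-apiserver static pod is running, nothing listens on 6443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "142.132.179.183:6443" // address taken from the log above
	conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
	if err != nil {
		fmt.Println("API server not reachable yet:", err) // matches the reflector errors
		return
	}
	conn.Close()
	fmt.Println("API server port is open; kubelet list/watch calls can succeed")
}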
Feb 13 15:32:12.293756 kubelet[2464]: E0213 15:32:12.293079 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.179.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-6-ce8ef0549e?timeout=10s\": dial tcp 142.132.179.183:6443: connect: connection refused" interval="400ms" Feb 13 15:32:12.293756 kubelet[2464]: I0213 15:32:12.293689 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294327 kubelet[2464]: I0213 15:32:12.294293 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294892 kubelet[2464]: I0213 15:32:12.294532 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294892 kubelet[2464]: I0213 15:32:12.294590 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294892 kubelet[2464]: I0213 15:32:12.294625 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936a04b7b2da81ede2129a886084f054-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-6-ce8ef0549e\" (UID: \"936a04b7b2da81ede2129a886084f054\") " pod="kube-system/kube-scheduler-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294892 kubelet[2464]: I0213 15:32:12.294673 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.294892 kubelet[2464]: I0213 15:32:12.294712 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.295130 kubelet[2464]: I0213 15:32:12.294749 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.295130 kubelet[2464]: I0213 15:32:12.294786 2464 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.399343 kubelet[2464]: I0213 15:32:12.398835 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.399343 kubelet[2464]: E0213 15:32:12.399309 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.179.183:6443/api/v1/nodes\": dial tcp 142.132.179.183:6443: connect: connection refused" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.559745 containerd[1492]: time="2025-02-13T15:32:12.559596664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-6-ce8ef0549e,Uid:58517a95d37ac0d4c544954d18cec0f6,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:12.569393 containerd[1492]: time="2025-02-13T15:32:12.568813749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-6-ce8ef0549e,Uid:936a04b7b2da81ede2129a886084f054,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:12.581496 containerd[1492]: time="2025-02-13T15:32:12.581005755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-6-ce8ef0549e,Uid:113024124467c3d4afa32dd72de5c3a5,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:12.694710 kubelet[2464]: E0213 15:32:12.694672 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.179.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-6-ce8ef0549e?timeout=10s\": dial tcp 142.132.179.183:6443: connect: connection refused" interval="800ms" Feb 13 15:32:12.801977 kubelet[2464]: I0213 15:32:12.801943 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.802374 kubelet[2464]: E0213 15:32:12.802338 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.179.183:6443/api/v1/nodes\": dial tcp 142.132.179.183:6443: connect: connection refused" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:12.973848 kubelet[2464]: W0213 15:32:12.973679 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://142.132.179.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:12.973848 kubelet[2464]: E0213 15:32:12.973734 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://142.132.179.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.083312 kubelet[2464]: W0213 15:32:13.083230 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://142.132.179.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.083312 kubelet[2464]: E0213 15:32:13.083285 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://142.132.179.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.099963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3495056687.mount: Deactivated successfully. Feb 13 15:32:13.109667 containerd[1492]: time="2025-02-13T15:32:13.109593840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:13.112855 containerd[1492]: time="2025-02-13T15:32:13.112783523Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:32:13.116808 containerd[1492]: time="2025-02-13T15:32:13.116748216Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:13.118539 containerd[1492]: time="2025-02-13T15:32:13.118314517Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:32:13.121389 containerd[1492]: time="2025-02-13T15:32:13.120539146Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:32:13.121389 containerd[1492]: time="2025-02-13T15:32:13.120659348Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:13.121587 containerd[1492]: time="2025-02-13T15:32:13.121556000Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:13.122776 containerd[1492]: time="2025-02-13T15:32:13.122304170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:32:13.123449 containerd[1492]: time="2025-02-13T15:32:13.123408225Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.294989ms" Feb 13 15:32:13.128830 containerd[1492]: time="2025-02-13T15:32:13.128727816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.791985ms" Feb 13 15:32:13.130769 containerd[1492]: time="2025-02-13T15:32:13.130691562Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.999644ms" Feb 13 15:32:13.135159 kubelet[2464]: W0213 15:32:13.134913 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://142.132.179.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-6-ce8ef0549e&limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.135159 kubelet[2464]: E0213 15:32:13.135031 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://142.132.179.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-6-ce8ef0549e&limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.141408 kubelet[2464]: W0213 15:32:13.141285 2464 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://142.132.179.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.141581 kubelet[2464]: E0213 15:32:13.141568 2464 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://142.132.179.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 142.132.179.183:6443: connect: connection refused Feb 13 15:32:13.278710 containerd[1492]: time="2025-02-13T15:32:13.278239531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:13.278888 containerd[1492]: time="2025-02-13T15:32:13.278700697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:13.278888 containerd[1492]: time="2025-02-13T15:32:13.278774458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:13.278888 containerd[1492]: time="2025-02-13T15:32:13.278791658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.279214 containerd[1492]: time="2025-02-13T15:32:13.279002461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.280372 containerd[1492]: time="2025-02-13T15:32:13.279721871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:13.280372 containerd[1492]: time="2025-02-13T15:32:13.280235237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.281029 containerd[1492]: time="2025-02-13T15:32:13.280881086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.287430 containerd[1492]: time="2025-02-13T15:32:13.287226091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:13.288414 containerd[1492]: time="2025-02-13T15:32:13.288109023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:13.288414 containerd[1492]: time="2025-02-13T15:32:13.288202504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.288867 containerd[1492]: time="2025-02-13T15:32:13.288700190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:13.313600 systemd[1]: Started cri-containerd-244162906e4e8aa6c54b0df190c31855e58b3f705f0e6d73eb16fc50bf329248.scope - libcontainer container 244162906e4e8aa6c54b0df190c31855e58b3f705f0e6d73eb16fc50bf329248. Feb 13 15:32:13.315873 systemd[1]: Started cri-containerd-81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b.scope - libcontainer container 81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b. Feb 13 15:32:13.318566 systemd[1]: Started cri-containerd-e058b485bd56598836795588cc3d2d65c45fc683eab26ac2df8b05d7890f41d1.scope - libcontainer container e058b485bd56598836795588cc3d2d65c45fc683eab26ac2df8b05d7890f41d1. Feb 13 15:32:13.363431 containerd[1492]: time="2025-02-13T15:32:13.363147104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-6-ce8ef0549e,Uid:58517a95d37ac0d4c544954d18cec0f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b\"" Feb 13 15:32:13.380926 containerd[1492]: time="2025-02-13T15:32:13.380296973Z" level=info msg="CreateContainer within sandbox \"81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:32:13.385023 containerd[1492]: time="2025-02-13T15:32:13.384943915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-6-ce8ef0549e,Uid:113024124467c3d4afa32dd72de5c3a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"244162906e4e8aa6c54b0df190c31855e58b3f705f0e6d73eb16fc50bf329248\"" Feb 13 15:32:13.394530 containerd[1492]: time="2025-02-13T15:32:13.394291439Z" level=info msg="CreateContainer within sandbox \"244162906e4e8aa6c54b0df190c31855e58b3f705f0e6d73eb16fc50bf329248\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:32:13.403827 containerd[1492]: time="2025-02-13T15:32:13.403667805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-6-ce8ef0549e,Uid:936a04b7b2da81ede2129a886084f054,Namespace:kube-system,Attempt:0,} returns sandbox id \"e058b485bd56598836795588cc3d2d65c45fc683eab26ac2df8b05d7890f41d1\"" Feb 13 15:32:13.410430 containerd[1492]: time="2025-02-13T15:32:13.410051690Z" level=info msg="CreateContainer within sandbox \"e058b485bd56598836795588cc3d2d65c45fc683eab26ac2df8b05d7890f41d1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:32:13.419943 containerd[1492]: time="2025-02-13T15:32:13.419892461Z" level=info msg="CreateContainer within sandbox \"81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e\"" Feb 13 15:32:13.421151 containerd[1492]: 
time="2025-02-13T15:32:13.421023956Z" level=info msg="StartContainer for \"a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e\"" Feb 13 15:32:13.422395 containerd[1492]: time="2025-02-13T15:32:13.422287693Z" level=info msg="CreateContainer within sandbox \"244162906e4e8aa6c54b0df190c31855e58b3f705f0e6d73eb16fc50bf329248\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b58e6e27910c7fe980564749df5c15b6c04ac4c4a1853a6a92cfe7b8cea31416\"" Feb 13 15:32:13.423196 containerd[1492]: time="2025-02-13T15:32:13.423165345Z" level=info msg="StartContainer for \"b58e6e27910c7fe980564749df5c15b6c04ac4c4a1853a6a92cfe7b8cea31416\"" Feb 13 15:32:13.439202 containerd[1492]: time="2025-02-13T15:32:13.437940862Z" level=info msg="CreateContainer within sandbox \"e058b485bd56598836795588cc3d2d65c45fc683eab26ac2df8b05d7890f41d1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6a33dd91079d57270127e984c6fb450f04c889a5ba08e97a9e0e25f22a187177\"" Feb 13 15:32:13.439779 containerd[1492]: time="2025-02-13T15:32:13.439741566Z" level=info msg="StartContainer for \"6a33dd91079d57270127e984c6fb450f04c889a5ba08e97a9e0e25f22a187177\"" Feb 13 15:32:13.460582 systemd[1]: Started cri-containerd-a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e.scope - libcontainer container a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e. Feb 13 15:32:13.470577 systemd[1]: Started cri-containerd-b58e6e27910c7fe980564749df5c15b6c04ac4c4a1853a6a92cfe7b8cea31416.scope - libcontainer container b58e6e27910c7fe980564749df5c15b6c04ac4c4a1853a6a92cfe7b8cea31416. Feb 13 15:32:13.491896 systemd[1]: Started cri-containerd-6a33dd91079d57270127e984c6fb450f04c889a5ba08e97a9e0e25f22a187177.scope - libcontainer container 6a33dd91079d57270127e984c6fb450f04c889a5ba08e97a9e0e25f22a187177. 
Feb 13 15:32:13.495843 kubelet[2464]: E0213 15:32:13.495790 2464 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://142.132.179.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-6-ce8ef0549e?timeout=10s\": dial tcp 142.132.179.183:6443: connect: connection refused" interval="1.6s" Feb 13 15:32:13.546332 containerd[1492]: time="2025-02-13T15:32:13.545431696Z" level=info msg="StartContainer for \"b58e6e27910c7fe980564749df5c15b6c04ac4c4a1853a6a92cfe7b8cea31416\" returns successfully" Feb 13 15:32:13.568106 containerd[1492]: time="2025-02-13T15:32:13.568055638Z" level=info msg="StartContainer for \"a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e\" returns successfully" Feb 13 15:32:13.592037 containerd[1492]: time="2025-02-13T15:32:13.591981597Z" level=info msg="StartContainer for \"6a33dd91079d57270127e984c6fb450f04c889a5ba08e97a9e0e25f22a187177\" returns successfully" Feb 13 15:32:13.606195 kubelet[2464]: I0213 15:32:13.605963 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:13.607016 kubelet[2464]: E0213 15:32:13.606865 2464 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://142.132.179.183:6443/api/v1/nodes\": dial tcp 142.132.179.183:6443: connect: connection refused" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:15.209701 kubelet[2464]: I0213 15:32:15.209668 2464 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:15.825815 kubelet[2464]: E0213 15:32:15.825766 2464 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-1-1-6-ce8ef0549e\" not found" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:15.903929 kubelet[2464]: I0213 15:32:15.903892 2464 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:15.947698 kubelet[2464]: E0213 15:32:15.947639 2464 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-1-1-6-ce8ef0549e.1823ce51dbd2ffd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-1-6-ce8ef0549e,UID:ci-4186-1-1-6-ce8ef0549e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-6-ce8ef0549e,},FirstTimestamp:2025-02-13 15:32:12.070854613 +0000 UTC m=+1.010712051,LastTimestamp:2025-02-13 15:32:12.070854613 +0000 UTC m=+1.010712051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-6-ce8ef0549e,}" Feb 13 15:32:16.069383 kubelet[2464]: I0213 15:32:16.069013 2464 apiserver.go:52] "Watching apiserver" Feb 13 15:32:16.092255 kubelet[2464]: I0213 15:32:16.091900 2464 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:32:18.756490 systemd[1]: Reloading requested from client PID 2737 ('systemctl') (unit session-7.scope)... Feb 13 15:32:18.756510 systemd[1]: Reloading... Feb 13 15:32:18.852394 zram_generator::config[2773]: No configuration found. Feb 13 15:32:18.965720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
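The lease-controller retries above show a doubling interval: 200ms, then 400ms, 800ms, and finally 1.6s once the control-plane containers are starting. A sketch of that backoff sequence; the 7-second cap is an assumption for illustration and does not appear in the log:

// lease_backoff.go — the doubling retry interval visible in the
// "Failed to ensure lease exists, will retry" lines (200ms, 400ms, 800ms, 1.6s).
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumed upper bound, not shown in the log
	for i := 0; i < 8; i++ {
		fmt.Println("retry interval:", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}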
Feb 13 15:32:19.050806 systemd[1]: Reloading finished in 293 ms. Feb 13 15:32:19.093147 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:19.094157 kubelet[2464]: I0213 15:32:19.093032 2464 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:32:19.110179 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:32:19.111018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:19.111100 systemd[1]: kubelet.service: Consumed 1.490s CPU time, 113.6M memory peak, 0B memory swap peak. Feb 13 15:32:19.119736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:32:19.249705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:32:19.261764 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:32:19.334789 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:19.334789 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:32:19.334789 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:32:19.334789 kubelet[2821]: I0213 15:32:19.333994 2821 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:32:19.341503 kubelet[2821]: I0213 15:32:19.341343 2821 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:32:19.341503 kubelet[2821]: I0213 15:32:19.341397 2821 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:32:19.343781 kubelet[2821]: I0213 15:32:19.343737 2821 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:32:19.350439 kubelet[2821]: I0213 15:32:19.350387 2821 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:32:19.356257 kubelet[2821]: I0213 15:32:19.356210 2821 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:32:19.365176 kubelet[2821]: I0213 15:32:19.365134 2821 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:32:19.365784 kubelet[2821]: I0213 15:32:19.365748 2821 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:32:19.365989 kubelet[2821]: I0213 15:32:19.365963 2821 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:32:19.365989 kubelet[2821]: I0213 15:32:19.365988 2821 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:32:19.366098 kubelet[2821]: I0213 15:32:19.365997 2821 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:32:19.366098 kubelet[2821]: I0213 15:32:19.366029 2821 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:19.366776 kubelet[2821]: I0213 15:32:19.366151 2821 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:32:19.366776 kubelet[2821]: I0213 15:32:19.366173 2821 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:32:19.366776 kubelet[2821]: I0213 15:32:19.366666 2821 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:32:19.366776 kubelet[2821]: I0213 15:32:19.366692 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:32:19.371175 kubelet[2821]: I0213 15:32:19.370966 2821 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:32:19.371323 kubelet[2821]: I0213 15:32:19.371187 2821 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:32:19.371712 kubelet[2821]: I0213 15:32:19.371659 2821 server.go:1256] "Started kubelet" Feb 13 15:32:19.377611 kubelet[2821]: I0213 15:32:19.377575 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:32:19.387409 kubelet[2821]: I0213 15:32:19.386835 2821 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:32:19.387748 kubelet[2821]: I0213 15:32:19.387712 2821 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:32:19.389169 kubelet[2821]: I0213 15:32:19.388843 2821 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Feb 13 15:32:19.389169 kubelet[2821]: I0213 15:32:19.389066 2821 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:32:19.391580 kubelet[2821]: I0213 15:32:19.390912 2821 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:32:19.402425 kubelet[2821]: I0213 15:32:19.400622 2821 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:32:19.402425 kubelet[2821]: I0213 15:32:19.400796 2821 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:32:19.419046 kubelet[2821]: I0213 15:32:19.419000 2821 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:32:19.419865 kubelet[2821]: I0213 15:32:19.419806 2821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:32:19.421263 kubelet[2821]: I0213 15:32:19.421222 2821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:32:19.421263 kubelet[2821]: I0213 15:32:19.421265 2821 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:32:19.421445 kubelet[2821]: I0213 15:32:19.421303 2821 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:32:19.421445 kubelet[2821]: E0213 15:32:19.421396 2821 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:32:19.436124 kubelet[2821]: I0213 15:32:19.436086 2821 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:32:19.436376 kubelet[2821]: I0213 15:32:19.436300 2821 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:32:19.448755 kubelet[2821]: E0213 15:32:19.448720 2821 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:32:19.503750 kubelet[2821]: I0213 15:32:19.503681 2821 kubelet_node_status.go:73] "Attempting to register node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.516943 2821 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.516968 2821 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.516991 2821 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.517292 2821 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.517321 2821 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:32:19.517865 kubelet[2821]: I0213 15:32:19.517330 2821 policy_none.go:49] "None policy: Start" Feb 13 15:32:19.520037 kubelet[2821]: I0213 15:32:19.519979 2821 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:32:19.521023 kubelet[2821]: I0213 15:32:19.520993 2821 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:32:19.521296 kubelet[2821]: I0213 15:32:19.521256 2821 state_mem.go:75] "Updated machine memory state" Feb 13 15:32:19.521810 kubelet[2821]: E0213 15:32:19.521764 2821 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:32:19.526617 kubelet[2821]: I0213 15:32:19.526178 2821 kubelet_node_status.go:112] "Node was previously registered" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.526617 kubelet[2821]: I0213 15:32:19.526305 2821 kubelet_node_status.go:76] "Successfully registered node" node="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.545392 kubelet[2821]: I0213 15:32:19.543743 2821 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:32:19.545392 kubelet[2821]: I0213 15:32:19.543994 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:32:19.722973 kubelet[2821]: I0213 15:32:19.722437 2821 topology_manager.go:215] "Topology Admit Handler" podUID="113024124467c3d4afa32dd72de5c3a5" podNamespace="kube-system" podName="kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.723359 kubelet[2821]: I0213 15:32:19.723250 2821 topology_manager.go:215] "Topology Admit Handler" podUID="58517a95d37ac0d4c544954d18cec0f6" podNamespace="kube-system" podName="kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.725383 kubelet[2821]: I0213 15:32:19.723525 2821 topology_manager.go:215] "Topology Admit Handler" podUID="936a04b7b2da81ede2129a886084f054" podNamespace="kube-system" podName="kube-scheduler-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.741716 kubelet[2821]: E0213 15:32:19.740608 2821 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-1-6-ce8ef0549e\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802452 kubelet[2821]: I0213 15:32:19.802365 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802452 kubelet[2821]: I0213 
15:32:19.802427 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802452 kubelet[2821]: I0213 15:32:19.802465 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802657 kubelet[2821]: I0213 15:32:19.802500 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802657 kubelet[2821]: I0213 15:32:19.802526 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/936a04b7b2da81ede2129a886084f054-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-6-ce8ef0549e\" (UID: \"936a04b7b2da81ede2129a886084f054\") " pod="kube-system/kube-scheduler-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802657 kubelet[2821]: I0213 15:32:19.802563 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802657 kubelet[2821]: I0213 15:32:19.802593 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802657 kubelet[2821]: I0213 15:32:19.802616 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58517a95d37ac0d4c544954d18cec0f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" (UID: \"58517a95d37ac0d4c544954d18cec0f6\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:19.802793 kubelet[2821]: I0213 15:32:19.802638 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/113024124467c3d4afa32dd72de5c3a5-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" (UID: \"113024124467c3d4afa32dd72de5c3a5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:20.369239 kubelet[2821]: I0213 15:32:20.369162 2821 apiserver.go:52] "Watching apiserver" Feb 13 15:32:20.400800 kubelet[2821]: I0213 15:32:20.400760 
2821 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:32:20.533678 kubelet[2821]: E0213 15:32:20.533029 2821 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-1-1-6-ce8ef0549e\" already exists" pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:20.556522 kubelet[2821]: E0213 15:32:20.556464 2821 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-1-1-6-ce8ef0549e\" already exists" pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:20.600790 kubelet[2821]: I0213 15:32:20.600327 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-1-6-ce8ef0549e" podStartSLOduration=1.600272802 podStartE2EDuration="1.600272802s" podCreationTimestamp="2025-02-13 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:20.56603132 +0000 UTC m=+1.298947091" watchObservedRunningTime="2025-02-13 15:32:20.600272802 +0000 UTC m=+1.333188533" Feb 13 15:32:20.643684 kubelet[2821]: I0213 15:32:20.643433 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-1-6-ce8ef0549e" podStartSLOduration=1.6433871469999999 podStartE2EDuration="1.643387147s" podCreationTimestamp="2025-02-13 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:20.600652246 +0000 UTC m=+1.333568017" watchObservedRunningTime="2025-02-13 15:32:20.643387147 +0000 UTC m=+1.376302918" Feb 13 15:32:20.698645 kubelet[2821]: I0213 15:32:20.698582 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-1-6-ce8ef0549e" podStartSLOduration=1.69742338 podStartE2EDuration="1.69742338s" podCreationTimestamp="2025-02-13 15:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:20.644970965 +0000 UTC m=+1.377886696" watchObservedRunningTime="2025-02-13 15:32:20.69742338 +0000 UTC m=+1.430339151" Feb 13 15:32:24.381683 sudo[1891]: pam_unix(sudo:session): session closed for user root Feb 13 15:32:24.542657 sshd[1890]: Connection closed by 139.178.89.65 port 39824 Feb 13 15:32:24.543732 sshd-session[1888]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:24.549946 systemd[1]: sshd@7-142.132.179.183:22-139.178.89.65:39824.service: Deactivated successfully. Feb 13 15:32:24.552611 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:32:24.554436 systemd[1]: session-7.scope: Consumed 7.247s CPU time, 188.5M memory peak, 0B memory swap peak. Feb 13 15:32:24.556456 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:32:24.558237 systemd-logind[1466]: Removed session 7. 
Feb 13 15:32:33.131967 kubelet[2821]: I0213 15:32:33.131927 2821 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:32:33.133098 kubelet[2821]: I0213 15:32:33.133066 2821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:32:33.133137 containerd[1492]: time="2025-02-13T15:32:33.132740001Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:32:34.145860 kubelet[2821]: I0213 15:32:34.145813 2821 topology_manager.go:215] "Topology Admit Handler" podUID="43a225a3-f66b-4703-8bb5-e848ba84b75d" podNamespace="kube-system" podName="kube-proxy-q49gk" Feb 13 15:32:34.158976 systemd[1]: Created slice kubepods-besteffort-pod43a225a3_f66b_4703_8bb5_e848ba84b75d.slice - libcontainer container kubepods-besteffort-pod43a225a3_f66b_4703_8bb5_e848ba84b75d.slice. Feb 13 15:32:34.196833 kubelet[2821]: I0213 15:32:34.196217 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43a225a3-f66b-4703-8bb5-e848ba84b75d-lib-modules\") pod \"kube-proxy-q49gk\" (UID: \"43a225a3-f66b-4703-8bb5-e848ba84b75d\") " pod="kube-system/kube-proxy-q49gk" Feb 13 15:32:34.196833 kubelet[2821]: I0213 15:32:34.196307 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43a225a3-f66b-4703-8bb5-e848ba84b75d-kube-proxy\") pod \"kube-proxy-q49gk\" (UID: \"43a225a3-f66b-4703-8bb5-e848ba84b75d\") " pod="kube-system/kube-proxy-q49gk" Feb 13 15:32:34.196833 kubelet[2821]: I0213 15:32:34.196420 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43a225a3-f66b-4703-8bb5-e848ba84b75d-xtables-lock\") pod \"kube-proxy-q49gk\" (UID: \"43a225a3-f66b-4703-8bb5-e848ba84b75d\") " pod="kube-system/kube-proxy-q49gk" Feb 13 15:32:34.196833 kubelet[2821]: I0213 15:32:34.196721 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88d29\" (UniqueName: \"kubernetes.io/projected/43a225a3-f66b-4703-8bb5-e848ba84b75d-kube-api-access-88d29\") pod \"kube-proxy-q49gk\" (UID: \"43a225a3-f66b-4703-8bb5-e848ba84b75d\") " pod="kube-system/kube-proxy-q49gk" Feb 13 15:32:34.288022 kubelet[2821]: I0213 15:32:34.287844 2821 topology_manager.go:215] "Topology Admit Handler" podUID="49c41b27-698c-4ff0-8278-54529621f9bc" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-dtv65" Feb 13 15:32:34.297195 kubelet[2821]: I0213 15:32:34.297082 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkdj\" (UniqueName: \"kubernetes.io/projected/49c41b27-698c-4ff0-8278-54529621f9bc-kube-api-access-fbkdj\") pod \"tigera-operator-c7ccbd65-dtv65\" (UID: \"49c41b27-698c-4ff0-8278-54529621f9bc\") " pod="tigera-operator/tigera-operator-c7ccbd65-dtv65" Feb 13 15:32:34.297945 kubelet[2821]: I0213 15:32:34.297770 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49c41b27-698c-4ff0-8278-54529621f9bc-var-lib-calico\") pod \"tigera-operator-c7ccbd65-dtv65\" (UID: \"49c41b27-698c-4ff0-8278-54529621f9bc\") " pod="tigera-operator/tigera-operator-c7ccbd65-dtv65" Feb 13 
15:32:34.300248 systemd[1]: Created slice kubepods-besteffort-pod49c41b27_698c_4ff0_8278_54529621f9bc.slice - libcontainer container kubepods-besteffort-pod49c41b27_698c_4ff0_8278_54529621f9bc.slice. Feb 13 15:32:34.472954 containerd[1492]: time="2025-02-13T15:32:34.472740265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q49gk,Uid:43a225a3-f66b-4703-8bb5-e848ba84b75d,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:34.505280 containerd[1492]: time="2025-02-13T15:32:34.504989887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:34.505280 containerd[1492]: time="2025-02-13T15:32:34.505051128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:34.505280 containerd[1492]: time="2025-02-13T15:32:34.505066448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:34.505280 containerd[1492]: time="2025-02-13T15:32:34.505169089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:34.531625 systemd[1]: Started cri-containerd-e4805b5d11e539b5f9628d3c1d6c0b3d34dbeac4287ccc3663e1d0e24277a8c6.scope - libcontainer container e4805b5d11e539b5f9628d3c1d6c0b3d34dbeac4287ccc3663e1d0e24277a8c6. Feb 13 15:32:34.560812 containerd[1492]: time="2025-02-13T15:32:34.560762809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q49gk,Uid:43a225a3-f66b-4703-8bb5-e848ba84b75d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4805b5d11e539b5f9628d3c1d6c0b3d34dbeac4287ccc3663e1d0e24277a8c6\"" Feb 13 15:32:34.567079 containerd[1492]: time="2025-02-13T15:32:34.566965747Z" level=info msg="CreateContainer within sandbox \"e4805b5d11e539b5f9628d3c1d6c0b3d34dbeac4287ccc3663e1d0e24277a8c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:32:34.584733 containerd[1492]: time="2025-02-13T15:32:34.584615113Z" level=info msg="CreateContainer within sandbox \"e4805b5d11e539b5f9628d3c1d6c0b3d34dbeac4287ccc3663e1d0e24277a8c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a5c34635b2772f6e48f5eb92125f09ef5cb1652ef4ca5c027f94ec77a76fb557\"" Feb 13 15:32:34.585691 containerd[1492]: time="2025-02-13T15:32:34.585614922Z" level=info msg="StartContainer for \"a5c34635b2772f6e48f5eb92125f09ef5cb1652ef4ca5c027f94ec77a76fb557\"" Feb 13 15:32:34.608261 containerd[1492]: time="2025-02-13T15:32:34.608112773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dtv65,Uid:49c41b27-698c-4ff0-8278-54529621f9bc,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:32:34.619715 systemd[1]: Started cri-containerd-a5c34635b2772f6e48f5eb92125f09ef5cb1652ef4ca5c027f94ec77a76fb557.scope - libcontainer container a5c34635b2772f6e48f5eb92125f09ef5cb1652ef4ca5c027f94ec77a76fb557. Feb 13 15:32:34.648638 containerd[1492]: time="2025-02-13T15:32:34.647843145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:34.648638 containerd[1492]: time="2025-02-13T15:32:34.647922425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:34.648638 containerd[1492]: time="2025-02-13T15:32:34.647934905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:34.648638 containerd[1492]: time="2025-02-13T15:32:34.648053587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:34.672989 systemd[1]: Started cri-containerd-dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b.scope - libcontainer container dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b. Feb 13 15:32:34.674887 containerd[1492]: time="2025-02-13T15:32:34.673867308Z" level=info msg="StartContainer for \"a5c34635b2772f6e48f5eb92125f09ef5cb1652ef4ca5c027f94ec77a76fb557\" returns successfully" Feb 13 15:32:34.721322 containerd[1492]: time="2025-02-13T15:32:34.721208992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dtv65,Uid:49c41b27-698c-4ff0-8278-54529621f9bc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b\"" Feb 13 15:32:34.725043 containerd[1492]: time="2025-02-13T15:32:34.724901306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:32:35.538540 kubelet[2821]: I0213 15:32:35.538191 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-q49gk" podStartSLOduration=1.53814773 podStartE2EDuration="1.53814773s" podCreationTimestamp="2025-02-13 15:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:32:35.538023169 +0000 UTC m=+16.270938940" watchObservedRunningTime="2025-02-13 15:32:35.53814773 +0000 UTC m=+16.271063461" Feb 13 15:32:36.467032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143404948.mount: Deactivated successfully. 
Feb 13 15:32:36.780964 containerd[1492]: time="2025-02-13T15:32:36.780035096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:36.780964 containerd[1492]: time="2025-02-13T15:32:36.780886864Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:32:36.781980 containerd[1492]: time="2025-02-13T15:32:36.781915554Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:36.785408 containerd[1492]: time="2025-02-13T15:32:36.785265264Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:36.786493 containerd[1492]: time="2025-02-13T15:32:36.786434995Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.061484969s" Feb 13 15:32:36.786493 containerd[1492]: time="2025-02-13T15:32:36.786472115Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:32:36.792429 containerd[1492]: time="2025-02-13T15:32:36.791921965Z" level=info msg="CreateContainer within sandbox \"dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:32:36.807215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859475554.mount: Deactivated successfully. Feb 13 15:32:36.812409 containerd[1492]: time="2025-02-13T15:32:36.812289230Z" level=info msg="CreateContainer within sandbox \"dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01\"" Feb 13 15:32:36.815131 containerd[1492]: time="2025-02-13T15:32:36.815084336Z" level=info msg="StartContainer for \"72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01\"" Feb 13 15:32:36.851040 systemd[1]: Started cri-containerd-72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01.scope - libcontainer container 72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01. 
Feb 13 15:32:36.891282 containerd[1492]: time="2025-02-13T15:32:36.891207309Z" level=info msg="StartContainer for \"72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01\" returns successfully" Feb 13 15:32:42.499668 kubelet[2821]: I0213 15:32:42.497901 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-dtv65" podStartSLOduration=6.434048055 podStartE2EDuration="8.497852564s" podCreationTimestamp="2025-02-13 15:32:34 +0000 UTC" firstStartedPulling="2025-02-13 15:32:34.722932848 +0000 UTC m=+15.455848619" lastFinishedPulling="2025-02-13 15:32:36.786737317 +0000 UTC m=+17.519653128" observedRunningTime="2025-02-13 15:32:37.548687827 +0000 UTC m=+18.281603598" watchObservedRunningTime="2025-02-13 15:32:42.497852564 +0000 UTC m=+23.230768335" Feb 13 15:32:42.499668 kubelet[2821]: I0213 15:32:42.498122 2821 topology_manager.go:215] "Topology Admit Handler" podUID="ce36a984-6da1-4a67-916c-f85726be7b9d" podNamespace="calico-system" podName="calico-typha-f8d6b77cb-cg6d4" Feb 13 15:32:42.507937 systemd[1]: Created slice kubepods-besteffort-podce36a984_6da1_4a67_916c_f85726be7b9d.slice - libcontainer container kubepods-besteffort-podce36a984_6da1_4a67_916c_f85726be7b9d.slice. Feb 13 15:32:42.560090 kubelet[2821]: I0213 15:32:42.559914 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce36a984-6da1-4a67-916c-f85726be7b9d-typha-certs\") pod \"calico-typha-f8d6b77cb-cg6d4\" (UID: \"ce36a984-6da1-4a67-916c-f85726be7b9d\") " pod="calico-system/calico-typha-f8d6b77cb-cg6d4" Feb 13 15:32:42.560090 kubelet[2821]: I0213 15:32:42.559973 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85hw\" (UniqueName: \"kubernetes.io/projected/ce36a984-6da1-4a67-916c-f85726be7b9d-kube-api-access-d85hw\") pod \"calico-typha-f8d6b77cb-cg6d4\" (UID: \"ce36a984-6da1-4a67-916c-f85726be7b9d\") " pod="calico-system/calico-typha-f8d6b77cb-cg6d4" Feb 13 15:32:42.560090 kubelet[2821]: I0213 15:32:42.559997 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce36a984-6da1-4a67-916c-f85726be7b9d-tigera-ca-bundle\") pod \"calico-typha-f8d6b77cb-cg6d4\" (UID: \"ce36a984-6da1-4a67-916c-f85726be7b9d\") " pod="calico-system/calico-typha-f8d6b77cb-cg6d4" Feb 13 15:32:42.681786 kubelet[2821]: I0213 15:32:42.680858 2821 topology_manager.go:215] "Topology Admit Handler" podUID="2d4629fa-3af1-4f00-8836-94393fc1dd4c" podNamespace="calico-system" podName="calico-node-lv2tx" Feb 13 15:32:42.691134 systemd[1]: Created slice kubepods-besteffort-pod2d4629fa_3af1_4f00_8836_94393fc1dd4c.slice - libcontainer container kubepods-besteffort-pod2d4629fa_3af1_4f00_8836_94393fc1dd4c.slice. 
Feb 13 15:32:42.763244 kubelet[2821]: I0213 15:32:42.761910 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-cni-log-dir\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.763755 kubelet[2821]: I0213 15:32:42.763719 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-flexvol-driver-host\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.763974 kubelet[2821]: I0213 15:32:42.763953 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-var-lib-calico\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764569 kubelet[2821]: I0213 15:32:42.764194 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-xtables-lock\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764569 kubelet[2821]: I0213 15:32:42.764263 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-policysync\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764569 kubelet[2821]: I0213 15:32:42.764313 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2d4629fa-3af1-4f00-8836-94393fc1dd4c-node-certs\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764569 kubelet[2821]: I0213 15:32:42.764494 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-var-run-calico\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764569 kubelet[2821]: I0213 15:32:42.764561 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g79c\" (UniqueName: \"kubernetes.io/projected/2d4629fa-3af1-4f00-8836-94393fc1dd4c-kube-api-access-8g79c\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764798 kubelet[2821]: I0213 15:32:42.764602 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-lib-modules\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764798 kubelet[2821]: I0213 15:32:42.764638 2821 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d4629fa-3af1-4f00-8836-94393fc1dd4c-tigera-ca-bundle\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764798 kubelet[2821]: I0213 15:32:42.764660 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-cni-bin-dir\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.764798 kubelet[2821]: I0213 15:32:42.764683 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d4629fa-3af1-4f00-8836-94393fc1dd4c-cni-net-dir\") pod \"calico-node-lv2tx\" (UID: \"2d4629fa-3af1-4f00-8836-94393fc1dd4c\") " pod="calico-system/calico-node-lv2tx" Feb 13 15:32:42.814647 containerd[1492]: time="2025-02-13T15:32:42.813892986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f8d6b77cb-cg6d4,Uid:ce36a984-6da1-4a67-916c-f85726be7b9d,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:42.881545 kubelet[2821]: E0213 15:32:42.881318 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.881545 kubelet[2821]: W0213 15:32:42.881344 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.881545 kubelet[2821]: E0213 15:32:42.881381 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.883301 containerd[1492]: time="2025-02-13T15:32:42.883166489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:42.883301 containerd[1492]: time="2025-02-13T15:32:42.883228970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:42.883301 containerd[1492]: time="2025-02-13T15:32:42.883240970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:42.884122 containerd[1492]: time="2025-02-13T15:32:42.883805534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:42.886175 kubelet[2821]: I0213 15:32:42.885745 2821 topology_manager.go:215] "Topology Admit Handler" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" podNamespace="calico-system" podName="csi-node-driver-6lfhr" Feb 13 15:32:42.886903 kubelet[2821]: E0213 15:32:42.886801 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:42.907526 kubelet[2821]: E0213 15:32:42.907320 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.907526 kubelet[2821]: W0213 15:32:42.907424 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.907526 kubelet[2821]: E0213 15:32:42.907448 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.922578 systemd[1]: Started cri-containerd-fcc077c8bcb0104c3659bb12352024ea9257200613d09e6570efeb1cf3bfc174.scope - libcontainer container fcc077c8bcb0104c3659bb12352024ea9257200613d09e6570efeb1cf3bfc174. Feb 13 15:32:42.950882 kubelet[2821]: E0213 15:32:42.950850 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.951522 kubelet[2821]: W0213 15:32:42.951022 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.951522 kubelet[2821]: E0213 15:32:42.951053 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.951755 kubelet[2821]: E0213 15:32:42.951737 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.951950 kubelet[2821]: W0213 15:32:42.951931 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.952295 kubelet[2821]: E0213 15:32:42.952112 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.952734 kubelet[2821]: E0213 15:32:42.952610 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.952734 kubelet[2821]: W0213 15:32:42.952627 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.952734 kubelet[2821]: E0213 15:32:42.952642 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.953191 kubelet[2821]: E0213 15:32:42.953121 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.953309 kubelet[2821]: W0213 15:32:42.953278 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.953524 kubelet[2821]: E0213 15:32:42.953500 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.955210 kubelet[2821]: E0213 15:32:42.954055 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.955210 kubelet[2821]: W0213 15:32:42.954109 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.955210 kubelet[2821]: E0213 15:32:42.954124 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.955680 kubelet[2821]: E0213 15:32:42.955623 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.955680 kubelet[2821]: W0213 15:32:42.955638 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.955680 kubelet[2821]: E0213 15:32:42.955653 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.956120 kubelet[2821]: E0213 15:32:42.955957 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.956120 kubelet[2821]: W0213 15:32:42.955970 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.956120 kubelet[2821]: E0213 15:32:42.955983 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.956448 kubelet[2821]: E0213 15:32:42.956335 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.956448 kubelet[2821]: W0213 15:32:42.956370 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.956448 kubelet[2821]: E0213 15:32:42.956387 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.956970 kubelet[2821]: E0213 15:32:42.956817 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.956970 kubelet[2821]: W0213 15:32:42.956830 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.956970 kubelet[2821]: E0213 15:32:42.956849 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.957862 kubelet[2821]: E0213 15:32:42.957470 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.957862 kubelet[2821]: W0213 15:32:42.957488 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.957862 kubelet[2821]: E0213 15:32:42.957503 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.958078 kubelet[2821]: E0213 15:32:42.957974 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.958078 kubelet[2821]: W0213 15:32:42.957984 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.958078 kubelet[2821]: E0213 15:32:42.957996 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.958701 kubelet[2821]: E0213 15:32:42.958626 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.958701 kubelet[2821]: W0213 15:32:42.958639 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.958701 kubelet[2821]: E0213 15:32:42.958657 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.959147 kubelet[2821]: E0213 15:32:42.959133 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.959365 kubelet[2821]: W0213 15:32:42.959290 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.959365 kubelet[2821]: E0213 15:32:42.959311 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.961599 kubelet[2821]: E0213 15:32:42.961577 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.961599 kubelet[2821]: W0213 15:32:42.961598 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.961724 kubelet[2821]: E0213 15:32:42.961614 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.961798 kubelet[2821]: E0213 15:32:42.961787 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.961798 kubelet[2821]: W0213 15:32:42.961797 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.961933 kubelet[2821]: E0213 15:32:42.961814 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.962097 kubelet[2821]: E0213 15:32:42.962085 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.962097 kubelet[2821]: W0213 15:32:42.962096 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.962162 kubelet[2821]: E0213 15:32:42.962108 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.962283 kubelet[2821]: E0213 15:32:42.962273 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.962317 kubelet[2821]: W0213 15:32:42.962283 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.962317 kubelet[2821]: E0213 15:32:42.962299 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.962678 kubelet[2821]: E0213 15:32:42.962662 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.962678 kubelet[2821]: W0213 15:32:42.962674 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.962737 kubelet[2821]: E0213 15:32:42.962687 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.962860 kubelet[2821]: E0213 15:32:42.962843 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.962860 kubelet[2821]: W0213 15:32:42.962859 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.962922 kubelet[2821]: E0213 15:32:42.962869 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.962997 kubelet[2821]: E0213 15:32:42.962988 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.963043 kubelet[2821]: W0213 15:32:42.962997 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.963043 kubelet[2821]: E0213 15:32:42.963007 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.967022 kubelet[2821]: E0213 15:32:42.966853 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.967022 kubelet[2821]: W0213 15:32:42.966874 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.967022 kubelet[2821]: E0213 15:32:42.966897 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.967022 kubelet[2821]: I0213 15:32:42.966929 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/548c309d-1177-42c0-887f-c4ea253c82f9-registration-dir\") pod \"csi-node-driver-6lfhr\" (UID: \"548c309d-1177-42c0-887f-c4ea253c82f9\") " pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:42.967449 kubelet[2821]: E0213 15:32:42.967431 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.967687 kubelet[2821]: W0213 15:32:42.967518 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.967687 kubelet[2821]: E0213 15:32:42.967537 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.967687 kubelet[2821]: I0213 15:32:42.967559 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/548c309d-1177-42c0-887f-c4ea253c82f9-varrun\") pod \"csi-node-driver-6lfhr\" (UID: \"548c309d-1177-42c0-887f-c4ea253c82f9\") " pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:42.968189 kubelet[2821]: E0213 15:32:42.968015 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.968189 kubelet[2821]: W0213 15:32:42.968037 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.968189 kubelet[2821]: E0213 15:32:42.968060 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.968189 kubelet[2821]: I0213 15:32:42.968085 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/548c309d-1177-42c0-887f-c4ea253c82f9-kubelet-dir\") pod \"csi-node-driver-6lfhr\" (UID: \"548c309d-1177-42c0-887f-c4ea253c82f9\") " pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:42.968645 kubelet[2821]: E0213 15:32:42.968488 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.968645 kubelet[2821]: W0213 15:32:42.968527 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.968645 kubelet[2821]: E0213 15:32:42.968551 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.968645 kubelet[2821]: I0213 15:32:42.968580 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgd5\" (UniqueName: \"kubernetes.io/projected/548c309d-1177-42c0-887f-c4ea253c82f9-kube-api-access-lkgd5\") pod \"csi-node-driver-6lfhr\" (UID: \"548c309d-1177-42c0-887f-c4ea253c82f9\") " pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:42.969541 kubelet[2821]: E0213 15:32:42.969222 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.969541 kubelet[2821]: W0213 15:32:42.969238 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.969541 kubelet[2821]: E0213 15:32:42.969260 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.969541 kubelet[2821]: I0213 15:32:42.969283 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/548c309d-1177-42c0-887f-c4ea253c82f9-socket-dir\") pod \"csi-node-driver-6lfhr\" (UID: \"548c309d-1177-42c0-887f-c4ea253c82f9\") " pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:42.969865 kubelet[2821]: E0213 15:32:42.969844 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.969865 kubelet[2821]: W0213 15:32:42.969862 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.970038 kubelet[2821]: E0213 15:32:42.969885 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.970428 kubelet[2821]: E0213 15:32:42.970408 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.970428 kubelet[2821]: W0213 15:32:42.970425 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.970600 kubelet[2821]: E0213 15:32:42.970513 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.971002 kubelet[2821]: E0213 15:32:42.970978 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.971002 kubelet[2821]: W0213 15:32:42.970997 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.971238 kubelet[2821]: E0213 15:32:42.971075 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.971694 kubelet[2821]: E0213 15:32:42.971647 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.971694 kubelet[2821]: W0213 15:32:42.971684 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.971942 kubelet[2821]: E0213 15:32:42.971854 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.972407 kubelet[2821]: E0213 15:32:42.972331 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.972407 kubelet[2821]: W0213 15:32:42.972388 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.972542 kubelet[2821]: E0213 15:32:42.972485 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.972599 kubelet[2821]: E0213 15:32:42.972589 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.972599 kubelet[2821]: W0213 15:32:42.972598 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.972693 kubelet[2821]: E0213 15:32:42.972642 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.973475 kubelet[2821]: E0213 15:32:42.973435 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.973475 kubelet[2821]: W0213 15:32:42.973455 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.973475 kubelet[2821]: E0213 15:32:42.973473 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.973733 kubelet[2821]: E0213 15:32:42.973719 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.973733 kubelet[2821]: W0213 15:32:42.973731 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.973807 kubelet[2821]: E0213 15:32:42.973745 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:42.973948 kubelet[2821]: E0213 15:32:42.973936 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.973948 kubelet[2821]: W0213 15:32:42.973948 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.974151 kubelet[2821]: E0213 15:32:42.973963 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:42.974151 kubelet[2821]: E0213 15:32:42.974148 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:42.974213 kubelet[2821]: W0213 15:32:42.974157 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:42.974213 kubelet[2821]: E0213 15:32:42.974173 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.004607 containerd[1492]: time="2025-02-13T15:32:43.004245188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lv2tx,Uid:2d4629fa-3af1-4f00-8836-94393fc1dd4c,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:43.008579 containerd[1492]: time="2025-02-13T15:32:43.006812010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f8d6b77cb-cg6d4,Uid:ce36a984-6da1-4a67-916c-f85726be7b9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"fcc077c8bcb0104c3659bb12352024ea9257200613d09e6570efeb1cf3bfc174\"" Feb 13 15:32:43.012589 containerd[1492]: time="2025-02-13T15:32:43.012542217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:32:43.047962 containerd[1492]: time="2025-02-13T15:32:43.047744550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:43.049750 containerd[1492]: time="2025-02-13T15:32:43.047865231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:43.049750 containerd[1492]: time="2025-02-13T15:32:43.047882311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:43.050103 containerd[1492]: time="2025-02-13T15:32:43.049989969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:43.071608 kubelet[2821]: E0213 15:32:43.071467 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.071608 kubelet[2821]: W0213 15:32:43.071493 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.071608 kubelet[2821]: E0213 15:32:43.071553 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.072947 kubelet[2821]: E0213 15:32:43.072051 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.072947 kubelet[2821]: W0213 15:32:43.072072 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.072947 kubelet[2821]: E0213 15:32:43.072133 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.072947 kubelet[2821]: E0213 15:32:43.072507 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.072947 kubelet[2821]: W0213 15:32:43.072550 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.072947 kubelet[2821]: E0213 15:32:43.072579 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.073379 systemd[1]: Started cri-containerd-7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b.scope - libcontainer container 7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b. Feb 13 15:32:43.075431 kubelet[2821]: E0213 15:32:43.073935 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.075431 kubelet[2821]: W0213 15:32:43.073953 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.075431 kubelet[2821]: E0213 15:32:43.073976 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:43.076119 kubelet[2821]: E0213 15:32:43.075662 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.076119 kubelet[2821]: W0213 15:32:43.075796 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.076119 kubelet[2821]: E0213 15:32:43.075821 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.077729 kubelet[2821]: E0213 15:32:43.077604 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.077729 kubelet[2821]: W0213 15:32:43.077628 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.077729 kubelet[2821]: E0213 15:32:43.077691 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.078079 kubelet[2821]: E0213 15:32:43.078063 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.078183 kubelet[2821]: W0213 15:32:43.078078 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.078183 kubelet[2821]: E0213 15:32:43.078141 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.078979 kubelet[2821]: E0213 15:32:43.078607 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.078979 kubelet[2821]: W0213 15:32:43.078627 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.078979 kubelet[2821]: E0213 15:32:43.078793 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.078979 kubelet[2821]: E0213 15:32:43.078801 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.078979 kubelet[2821]: W0213 15:32:43.078801 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.079289 kubelet[2821]: E0213 15:32:43.079275 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:43.079465 kubelet[2821]: E0213 15:32:43.079450 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.079545 kubelet[2821]: W0213 15:32:43.079530 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.079644 kubelet[2821]: E0213 15:32:43.079616 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.079870 kubelet[2821]: E0213 15:32:43.079843 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.081166 kubelet[2821]: W0213 15:32:43.081124 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.081479 kubelet[2821]: E0213 15:32:43.081445 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.081714 kubelet[2821]: E0213 15:32:43.081698 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.081874 kubelet[2821]: W0213 15:32:43.081788 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.081874 kubelet[2821]: E0213 15:32:43.081854 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.082332 kubelet[2821]: E0213 15:32:43.082243 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.082332 kubelet[2821]: W0213 15:32:43.082260 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.082332 kubelet[2821]: E0213 15:32:43.082296 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.083334 kubelet[2821]: E0213 15:32:43.082831 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.083334 kubelet[2821]: W0213 15:32:43.082846 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.083334 kubelet[2821]: E0213 15:32:43.082907 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:43.083954 kubelet[2821]: E0213 15:32:43.083588 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.083954 kubelet[2821]: W0213 15:32:43.083604 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.083954 kubelet[2821]: E0213 15:32:43.083668 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.086500 kubelet[2821]: E0213 15:32:43.084276 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.086500 kubelet[2821]: W0213 15:32:43.084293 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.086500 kubelet[2821]: E0213 15:32:43.084383 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.087208 kubelet[2821]: E0213 15:32:43.087048 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.087208 kubelet[2821]: W0213 15:32:43.087071 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.087208 kubelet[2821]: E0213 15:32:43.087139 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.087658 kubelet[2821]: E0213 15:32:43.087552 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.087658 kubelet[2821]: W0213 15:32:43.087569 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.087658 kubelet[2821]: E0213 15:32:43.087615 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.088146 kubelet[2821]: E0213 15:32:43.088001 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.088146 kubelet[2821]: W0213 15:32:43.088020 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.088146 kubelet[2821]: E0213 15:32:43.088080 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:43.088904 kubelet[2821]: E0213 15:32:43.088814 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.088904 kubelet[2821]: W0213 15:32:43.088837 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.088904 kubelet[2821]: E0213 15:32:43.088900 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.090058 kubelet[2821]: E0213 15:32:43.089945 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.090058 kubelet[2821]: W0213 15:32:43.089966 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.090141 kubelet[2821]: E0213 15:32:43.090100 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.091141 kubelet[2821]: E0213 15:32:43.091061 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.091141 kubelet[2821]: W0213 15:32:43.091079 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.091216 kubelet[2821]: E0213 15:32:43.091168 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.092250 kubelet[2821]: E0213 15:32:43.092089 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.092250 kubelet[2821]: W0213 15:32:43.092108 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.092250 kubelet[2821]: E0213 15:32:43.092200 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.093810 kubelet[2821]: E0213 15:32:43.093559 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.093810 kubelet[2821]: W0213 15:32:43.093595 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.093810 kubelet[2821]: E0213 15:32:43.093616 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:43.095002 kubelet[2821]: E0213 15:32:43.094976 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.095117 kubelet[2821]: W0213 15:32:43.095100 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.095234 kubelet[2821]: E0213 15:32:43.095177 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.100794 kubelet[2821]: E0213 15:32:43.100752 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:43.100794 kubelet[2821]: W0213 15:32:43.100786 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:43.100959 kubelet[2821]: E0213 15:32:43.100809 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:43.115406 containerd[1492]: time="2025-02-13T15:32:43.115284192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lv2tx,Uid:2d4629fa-3af1-4f00-8836-94393fc1dd4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\"" Feb 13 15:32:44.423043 kubelet[2821]: E0213 15:32:44.422397 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:44.768608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837109048.mount: Deactivated successfully. 
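[editor's note] The repeated driver-call.go/plugins.go errors above come from the kubelet's FlexVolume prober: it finds the plugin directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ but no executable named uds inside it, so the "init" call produces empty stdout and the JSON unmarshal fails. In many reports this is harmless noise left behind by a node-agent FlexVolume directory that never got its driver binary installed. The sketch below is only an illustration of the driver-call contract the kubelet is probing for (stdout must be a JSON status object); it is not the real nodeagent~uds driver.

// flexvolume-init-stub.go: illustrative stub of the FlexVolume "init" handshake.
// The kubelet runs <plugin-dir>/<vendor~driver>/<driver> init and decodes stdout
// as JSON; empty stdout is what yields "unexpected end of JSON input" above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape the kubelet's driver-call code decodes.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare no attach support, so the kubelet
		// will not issue attach/detach calls to this driver.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Anything this stub does not handle is reported as not supported.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

[editor's note] With a driver like this in place (or with the stale nodeagent~uds directory removed), the prober would stop logging the unmarshal failures seen throughout this section.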
Feb 13 15:32:45.269442 containerd[1492]: time="2025-02-13T15:32:45.268981399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:45.270553 containerd[1492]: time="2025-02-13T15:32:45.270085968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:32:45.273142 containerd[1492]: time="2025-02-13T15:32:45.273088512Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:45.276177 containerd[1492]: time="2025-02-13T15:32:45.276100737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:45.277628 containerd[1492]: time="2025-02-13T15:32:45.276671382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.264084443s" Feb 13 15:32:45.277628 containerd[1492]: time="2025-02-13T15:32:45.276704742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:32:45.277628 containerd[1492]: time="2025-02-13T15:32:45.277375867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:32:45.294074 containerd[1492]: time="2025-02-13T15:32:45.294015282Z" level=info msg="CreateContainer within sandbox \"fcc077c8bcb0104c3659bb12352024ea9257200613d09e6570efeb1cf3bfc174\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:32:45.321899 containerd[1492]: time="2025-02-13T15:32:45.321842749Z" level=info msg="CreateContainer within sandbox \"fcc077c8bcb0104c3659bb12352024ea9257200613d09e6570efeb1cf3bfc174\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b638d9f0c10665d6d3d89f68c8404c90e27c1f2160d77d79455170d163e8c097\"" Feb 13 15:32:45.325033 containerd[1492]: time="2025-02-13T15:32:45.323753844Z" level=info msg="StartContainer for \"b638d9f0c10665d6d3d89f68c8404c90e27c1f2160d77d79455170d163e8c097\"" Feb 13 15:32:45.354819 systemd[1]: Started cri-containerd-b638d9f0c10665d6d3d89f68c8404c90e27c1f2160d77d79455170d163e8c097.scope - libcontainer container b638d9f0c10665d6d3d89f68c8404c90e27c1f2160d77d79455170d163e8c097. 
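[editor's note] The containerd entries above record the pull of ghcr.io/flatcar/calico/typha:v3.29.1 (repo tag, digest, size, and a pull time of about 2.26s) followed by CreateContainer/StartContainer for calico-typha inside the sandbox returned earlier. For reference, the sketch below shows the same pull issued directly through the containerd Go client instead of via the kubelet/CRI path; the socket path and the "k8s.io" namespace are conventional defaults and are assumptions, not values taken from this log.

// pull-typha.go: minimal sketch of the image pull reported in the
// "PullImage ... returns image reference" entries above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image reference seen in the log.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}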
Feb 13 15:32:45.416691 containerd[1492]: time="2025-02-13T15:32:45.416638199Z" level=info msg="StartContainer for \"b638d9f0c10665d6d3d89f68c8404c90e27c1f2160d77d79455170d163e8c097\" returns successfully" Feb 13 15:32:45.583785 kubelet[2821]: E0213 15:32:45.583670 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.583785 kubelet[2821]: W0213 15:32:45.583696 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.583785 kubelet[2821]: E0213 15:32:45.583719 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.584157 kubelet[2821]: E0213 15:32:45.584030 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.584157 kubelet[2821]: W0213 15:32:45.584039 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.584157 kubelet[2821]: E0213 15:32:45.584052 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.585234 kubelet[2821]: E0213 15:32:45.585193 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.585234 kubelet[2821]: W0213 15:32:45.585215 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.585234 kubelet[2821]: E0213 15:32:45.585233 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.586065 kubelet[2821]: E0213 15:32:45.585686 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.586065 kubelet[2821]: W0213 15:32:45.585706 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.586065 kubelet[2821]: E0213 15:32:45.585721 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.586065 kubelet[2821]: E0213 15:32:45.585924 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.586065 kubelet[2821]: W0213 15:32:45.585932 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.586065 kubelet[2821]: E0213 15:32:45.585944 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.586277 kubelet[2821]: E0213 15:32:45.586256 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.586277 kubelet[2821]: W0213 15:32:45.586274 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.586321 kubelet[2821]: E0213 15:32:45.586287 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.586681 kubelet[2821]: E0213 15:32:45.586641 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.586681 kubelet[2821]: W0213 15:32:45.586667 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.586681 kubelet[2821]: E0213 15:32:45.586680 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.587125 kubelet[2821]: E0213 15:32:45.586999 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.587125 kubelet[2821]: W0213 15:32:45.587124 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.587206 kubelet[2821]: E0213 15:32:45.587142 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.588528 kubelet[2821]: E0213 15:32:45.587698 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.588528 kubelet[2821]: W0213 15:32:45.587718 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.588528 kubelet[2821]: E0213 15:32:45.587734 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.588528 kubelet[2821]: E0213 15:32:45.588215 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.588528 kubelet[2821]: W0213 15:32:45.588225 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.588528 kubelet[2821]: E0213 15:32:45.588244 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.588528 kubelet[2821]: E0213 15:32:45.588477 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.588528 kubelet[2821]: W0213 15:32:45.588486 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.588765 kubelet[2821]: E0213 15:32:45.588499 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.588842 kubelet[2821]: E0213 15:32:45.588815 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.588842 kubelet[2821]: W0213 15:32:45.588834 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.588898 kubelet[2821]: E0213 15:32:45.588847 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.589088 kubelet[2821]: E0213 15:32:45.589067 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.589088 kubelet[2821]: W0213 15:32:45.589080 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.589088 kubelet[2821]: E0213 15:32:45.589092 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.589880 kubelet[2821]: E0213 15:32:45.589560 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.589880 kubelet[2821]: W0213 15:32:45.589579 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.589880 kubelet[2821]: E0213 15:32:45.589593 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.590002 kubelet[2821]: E0213 15:32:45.589897 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.590002 kubelet[2821]: W0213 15:32:45.589908 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.590002 kubelet[2821]: E0213 15:32:45.589921 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.602696 kubelet[2821]: E0213 15:32:45.602264 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.602696 kubelet[2821]: W0213 15:32:45.602297 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.602696 kubelet[2821]: E0213 15:32:45.602319 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.603142 kubelet[2821]: E0213 15:32:45.602815 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.603142 kubelet[2821]: W0213 15:32:45.602830 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.603142 kubelet[2821]: E0213 15:32:45.602844 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.603229 kubelet[2821]: E0213 15:32:45.603158 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.603229 kubelet[2821]: W0213 15:32:45.603170 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.603229 kubelet[2821]: E0213 15:32:45.603203 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.603575 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.604637 kubelet[2821]: W0213 15:32:45.603594 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.603631 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.603824 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.604637 kubelet[2821]: W0213 15:32:45.603834 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.603846 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.604121 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.604637 kubelet[2821]: W0213 15:32:45.604132 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.604637 kubelet[2821]: E0213 15:32:45.604144 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.604935 kubelet[2821]: E0213 15:32:45.604338 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.604935 kubelet[2821]: W0213 15:32:45.604685 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.604935 kubelet[2821]: E0213 15:32:45.604711 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.605749 kubelet[2821]: E0213 15:32:45.605480 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.605749 kubelet[2821]: W0213 15:32:45.605500 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.605749 kubelet[2821]: E0213 15:32:45.605526 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.606325 kubelet[2821]: E0213 15:32:45.606202 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.606325 kubelet[2821]: W0213 15:32:45.606220 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.606325 kubelet[2821]: E0213 15:32:45.606236 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.606905 kubelet[2821]: E0213 15:32:45.606568 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.606905 kubelet[2821]: W0213 15:32:45.606680 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.606905 kubelet[2821]: E0213 15:32:45.606701 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.607125 kubelet[2821]: E0213 15:32:45.607105 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.607125 kubelet[2821]: W0213 15:32:45.607118 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.608093 kubelet[2821]: E0213 15:32:45.607721 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.608708 kubelet[2821]: E0213 15:32:45.608544 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.608708 kubelet[2821]: W0213 15:32:45.608672 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.608708 kubelet[2821]: E0213 15:32:45.608707 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.608968 kubelet[2821]: E0213 15:32:45.608952 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.608968 kubelet[2821]: W0213 15:32:45.608965 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.609520 kubelet[2821]: E0213 15:32:45.608983 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.609520 kubelet[2821]: E0213 15:32:45.609140 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.609520 kubelet[2821]: W0213 15:32:45.609149 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.609520 kubelet[2821]: E0213 15:32:45.609160 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:45.609520 kubelet[2821]: E0213 15:32:45.609490 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.609520 kubelet[2821]: W0213 15:32:45.609501 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.609520 kubelet[2821]: E0213 15:32:45.609520 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.609981 kubelet[2821]: E0213 15:32:45.609929 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.609981 kubelet[2821]: W0213 15:32:45.609945 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.610503 kubelet[2821]: E0213 15:32:45.610153 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.610503 kubelet[2821]: W0213 15:32:45.610162 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.610503 kubelet[2821]: E0213 15:32:45.610178 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.610503 kubelet[2821]: E0213 15:32:45.610217 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:45.610794 kubelet[2821]: E0213 15:32:45.610776 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:45.610794 kubelet[2821]: W0213 15:32:45.610793 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:45.610867 kubelet[2821]: E0213 15:32:45.610808 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.421864 kubelet[2821]: E0213 15:32:46.421798 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:46.560680 kubelet[2821]: I0213 15:32:46.559739 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:46.597931 kubelet[2821]: E0213 15:32:46.597886 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.597931 kubelet[2821]: W0213 15:32:46.597926 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.598525 kubelet[2821]: E0213 15:32:46.597964 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.598525 kubelet[2821]: E0213 15:32:46.598318 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.598525 kubelet[2821]: W0213 15:32:46.598339 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.598525 kubelet[2821]: E0213 15:32:46.598409 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.598769 kubelet[2821]: E0213 15:32:46.598743 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.598769 kubelet[2821]: W0213 15:32:46.598767 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.598874 kubelet[2821]: E0213 15:32:46.598790 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.599136 kubelet[2821]: E0213 15:32:46.599068 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.599136 kubelet[2821]: W0213 15:32:46.599117 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.599273 kubelet[2821]: E0213 15:32:46.599151 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.599557 kubelet[2821]: E0213 15:32:46.599530 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.599557 kubelet[2821]: W0213 15:32:46.599551 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.599690 kubelet[2821]: E0213 15:32:46.599578 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.599865 kubelet[2821]: E0213 15:32:46.599848 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.599905 kubelet[2821]: W0213 15:32:46.599868 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.599905 kubelet[2821]: E0213 15:32:46.599889 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600154 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601544 kubelet[2821]: W0213 15:32:46.600168 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600183 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600397 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601544 kubelet[2821]: W0213 15:32:46.600407 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600436 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600666 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601544 kubelet[2821]: W0213 15:32:46.600676 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600689 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.601544 kubelet[2821]: E0213 15:32:46.600814 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601905 kubelet[2821]: W0213 15:32:46.600820 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.600830 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.600940 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601905 kubelet[2821]: W0213 15:32:46.600946 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.600957 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.601069 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601905 kubelet[2821]: W0213 15:32:46.601075 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.601084 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.601905 kubelet[2821]: E0213 15:32:46.601212 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.601905 kubelet[2821]: W0213 15:32:46.601219 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.602105 kubelet[2821]: E0213 15:32:46.601228 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.602105 kubelet[2821]: E0213 15:32:46.601337 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.602105 kubelet[2821]: W0213 15:32:46.601358 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.602105 kubelet[2821]: E0213 15:32:46.601369 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.602105 kubelet[2821]: E0213 15:32:46.601596 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.602105 kubelet[2821]: W0213 15:32:46.601605 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.602105 kubelet[2821]: E0213 15:32:46.601620 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.610835 kubelet[2821]: E0213 15:32:46.610489 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.610835 kubelet[2821]: W0213 15:32:46.610526 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.610835 kubelet[2821]: E0213 15:32:46.610565 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.611336 kubelet[2821]: E0213 15:32:46.610977 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.611336 kubelet[2821]: W0213 15:32:46.610996 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.611336 kubelet[2821]: E0213 15:32:46.611027 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.611661 kubelet[2821]: E0213 15:32:46.611332 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.611661 kubelet[2821]: W0213 15:32:46.611383 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.611661 kubelet[2821]: E0213 15:32:46.611448 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.611753 kubelet[2821]: E0213 15:32:46.611696 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.611753 kubelet[2821]: W0213 15:32:46.611707 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.611753 kubelet[2821]: E0213 15:32:46.611723 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.611924 kubelet[2821]: E0213 15:32:46.611867 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.611924 kubelet[2821]: W0213 15:32:46.611887 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.611924 kubelet[2821]: E0213 15:32:46.611900 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.612289 kubelet[2821]: E0213 15:32:46.612270 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.612289 kubelet[2821]: W0213 15:32:46.612289 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.612378 kubelet[2821]: E0213 15:32:46.612319 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.612808 kubelet[2821]: E0213 15:32:46.612772 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.612808 kubelet[2821]: W0213 15:32:46.612791 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.612876 kubelet[2821]: E0213 15:32:46.612812 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.613117 kubelet[2821]: E0213 15:32:46.613102 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.613117 kubelet[2821]: W0213 15:32:46.613116 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.613283 kubelet[2821]: E0213 15:32:46.613239 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.613411 kubelet[2821]: E0213 15:32:46.613394 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.613411 kubelet[2821]: W0213 15:32:46.613411 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.613535 kubelet[2821]: E0213 15:32:46.613521 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.613691 kubelet[2821]: E0213 15:32:46.613677 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.613728 kubelet[2821]: W0213 15:32:46.613692 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.613728 kubelet[2821]: E0213 15:32:46.613711 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.613927 kubelet[2821]: E0213 15:32:46.613915 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.613966 kubelet[2821]: W0213 15:32:46.613928 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.613966 kubelet[2821]: E0213 15:32:46.613957 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.614185 kubelet[2821]: E0213 15:32:46.614172 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.614185 kubelet[2821]: W0213 15:32:46.614184 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.614258 kubelet[2821]: E0213 15:32:46.614209 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.614564 kubelet[2821]: E0213 15:32:46.614524 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.614564 kubelet[2821]: W0213 15:32:46.614541 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.614641 kubelet[2821]: E0213 15:32:46.614593 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.615240 kubelet[2821]: E0213 15:32:46.615210 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.615240 kubelet[2821]: W0213 15:32:46.615229 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.615337 kubelet[2821]: E0213 15:32:46.615320 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:46.615576 kubelet[2821]: E0213 15:32:46.615560 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.615576 kubelet[2821]: W0213 15:32:46.615575 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.615655 kubelet[2821]: E0213 15:32:46.615588 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.616065 kubelet[2821]: E0213 15:32:46.615725 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.616065 kubelet[2821]: W0213 15:32:46.615733 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.616065 kubelet[2821]: E0213 15:32:46.615744 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.616065 kubelet[2821]: E0213 15:32:46.615888 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.616065 kubelet[2821]: W0213 15:32:46.615895 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.616065 kubelet[2821]: E0213 15:32:46.615905 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:32:46.616250 kubelet[2821]: E0213 15:32:46.616225 2821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:32:46.616250 kubelet[2821]: W0213 15:32:46.616234 2821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:32:46.616250 kubelet[2821]: E0213 15:32:46.616245 2821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:32:47.345035 containerd[1492]: time="2025-02-13T15:32:47.344973952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:47.346373 containerd[1492]: time="2025-02-13T15:32:47.346142121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:32:47.347274 containerd[1492]: time="2025-02-13T15:32:47.347218370Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:47.350475 containerd[1492]: time="2025-02-13T15:32:47.350408955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:47.351712 containerd[1492]: time="2025-02-13T15:32:47.351113560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 2.073706573s" Feb 13 15:32:47.351712 containerd[1492]: time="2025-02-13T15:32:47.351158001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:32:47.353579 containerd[1492]: time="2025-02-13T15:32:47.353329258Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:32:47.375931 containerd[1492]: time="2025-02-13T15:32:47.375884317Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19\"" Feb 13 15:32:47.377630 containerd[1492]: time="2025-02-13T15:32:47.377576931Z" level=info msg="StartContainer for \"b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19\"" Feb 13 15:32:47.420628 systemd[1]: Started cri-containerd-b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19.scope - libcontainer container b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19. Feb 13 15:32:47.464671 containerd[1492]: time="2025-02-13T15:32:47.464619742Z" level=info msg="StartContainer for \"b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19\" returns successfully" Feb 13 15:32:47.486077 systemd[1]: cri-containerd-b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19.scope: Deactivated successfully. Feb 13 15:32:47.512790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19-rootfs.mount: Deactivated successfully. 
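[Editor's note] The driver-call failures above come from kubelet probing the FlexVolume plugin directory: it runs the driver binary with the init argument and decodes its stdout as JSON. Because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet (the pod2daemon-flexvol "flexvol-driver" container has only just been pulled and presumably installs it), the output is empty and Go's encoding/json reports exactly "unexpected end of JSON input". A minimal sketch of that failure mode, not kubelet's actual driver-call code; the DriverStatus fields are assumptions:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is an illustrative stand-in for the JSON reply a FlexVolume
// driver is expected to print; the field names are assumptions, not kubelet's types.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	// If the binary is missing, Output returns an error and an empty byte slice,
	// which corresponds to the W-level "driver call failed ... output: \"\"" lines.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	// Unmarshalling the empty output yields "unexpected end of JSON input",
	// the E-level driver-call.go message repeated throughout the log.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output for command init:", err)
	}
}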
Feb 13 15:32:47.591983 kubelet[2821]: I0213 15:32:47.590222 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-f8d6b77cb-cg6d4" podStartSLOduration=3.323702955 podStartE2EDuration="5.590163619s" podCreationTimestamp="2025-02-13 15:32:42 +0000 UTC" firstStartedPulling="2025-02-13 15:32:43.010581361 +0000 UTC m=+23.743497132" lastFinishedPulling="2025-02-13 15:32:45.277042025 +0000 UTC m=+26.009957796" observedRunningTime="2025-02-13 15:32:45.576190175 +0000 UTC m=+26.309105946" watchObservedRunningTime="2025-02-13 15:32:47.590163619 +0000 UTC m=+28.323079390" Feb 13 15:32:47.622255 containerd[1492]: time="2025-02-13T15:32:47.621555269Z" level=info msg="shim disconnected" id=b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19 namespace=k8s.io Feb 13 15:32:47.622255 containerd[1492]: time="2025-02-13T15:32:47.621627909Z" level=warning msg="cleaning up after shim disconnected" id=b21c423a17517d383f64f264d67d777a5ba65c43f8fb6c22f24a83205144fc19 namespace=k8s.io Feb 13 15:32:47.622255 containerd[1492]: time="2025-02-13T15:32:47.621636029Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:48.422238 kubelet[2821]: E0213 15:32:48.422196 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:48.576166 containerd[1492]: time="2025-02-13T15:32:48.575910959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:32:50.423247 kubelet[2821]: E0213 15:32:50.423196 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:51.541431 containerd[1492]: time="2025-02-13T15:32:51.540570588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:51.542828 containerd[1492]: time="2025-02-13T15:32:51.542296321Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:51.542828 containerd[1492]: time="2025-02-13T15:32:51.542387042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:32:51.548216 containerd[1492]: time="2025-02-13T15:32:51.548115165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:51.550250 containerd[1492]: time="2025-02-13T15:32:51.549608537Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.973650738s" Feb 13 15:32:51.550522 containerd[1492]: time="2025-02-13T15:32:51.550492024Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:32:51.556047 containerd[1492]: time="2025-02-13T15:32:51.555511302Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:32:51.581579 containerd[1492]: time="2025-02-13T15:32:51.581527660Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1\"" Feb 13 15:32:51.582799 containerd[1492]: time="2025-02-13T15:32:51.582586668Z" level=info msg="StartContainer for \"47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1\"" Feb 13 15:32:51.617619 systemd[1]: Started cri-containerd-47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1.scope - libcontainer container 47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1. Feb 13 15:32:51.657724 containerd[1492]: time="2025-02-13T15:32:51.657312397Z" level=info msg="StartContainer for \"47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1\" returns successfully" Feb 13 15:32:52.164239 containerd[1492]: time="2025-02-13T15:32:52.164165482Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:32:52.168582 systemd[1]: cri-containerd-47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1.scope: Deactivated successfully. Feb 13 15:32:52.197210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1-rootfs.mount: Deactivated successfully. Feb 13 15:32:52.209576 kubelet[2821]: I0213 15:32:52.209505 2821 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:32:52.251808 kubelet[2821]: I0213 15:32:52.251753 2821 topology_manager.go:215] "Topology Admit Handler" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" podNamespace="kube-system" podName="coredns-76f75df574-84sk8" Feb 13 15:32:52.254912 kubelet[2821]: I0213 15:32:52.254869 2821 topology_manager.go:215] "Topology Admit Handler" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" podNamespace="kube-system" podName="coredns-76f75df574-4x9md" Feb 13 15:32:52.256676 kubelet[2821]: I0213 15:32:52.255614 2821 topology_manager.go:215] "Topology Admit Handler" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" podNamespace="calico-apiserver" podName="calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:52.269834 systemd[1]: Created slice kubepods-burstable-pod13d5c8c3_cbc0_413c_8112_1a04a642e871.slice - libcontainer container kubepods-burstable-pod13d5c8c3_cbc0_413c_8112_1a04a642e871.slice. 
Feb 13 15:32:52.275653 kubelet[2821]: W0213 15:32:52.275605 2821 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.275653 kubelet[2821]: E0213 15:32:52.275658 2821 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.281803 kubelet[2821]: W0213 15:32:52.281256 2821 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.281803 kubelet[2821]: E0213 15:32:52.281294 2821 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.281803 kubelet[2821]: W0213 15:32:52.281337 2821 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.285009 kubelet[2821]: E0213 15:32:52.283496 2821 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4186-1-1-6-ce8ef0549e" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4186-1-1-6-ce8ef0549e' and this object Feb 13 15:32:52.284550 systemd[1]: Created slice kubepods-burstable-pod79306bcb_17fd_459b_b782_0d95273cdb59.slice - libcontainer container kubepods-burstable-pod79306bcb_17fd_459b_b782_0d95273cdb59.slice. Feb 13 15:32:52.292394 kubelet[2821]: I0213 15:32:52.290344 2821 topology_manager.go:215] "Topology Admit Handler" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" podNamespace="calico-system" podName="calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:52.292394 kubelet[2821]: I0213 15:32:52.290591 2821 topology_manager.go:215] "Topology Admit Handler" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" podNamespace="calico-apiserver" podName="calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:52.305541 systemd[1]: Created slice kubepods-besteffort-pod945269f2_3dde_4aed_82a0_7f736010a34e.slice - libcontainer container kubepods-besteffort-pod945269f2_3dde_4aed_82a0_7f736010a34e.slice. 
Feb 13 15:32:52.316277 systemd[1]: Created slice kubepods-besteffort-pod42f57691_268d_46e2_b88f_eb306fac4b02.slice - libcontainer container kubepods-besteffort-pod42f57691_268d_46e2_b88f_eb306fac4b02.slice. Feb 13 15:32:52.326305 systemd[1]: Created slice kubepods-besteffort-pod312ba18f_ce14_4faf_8d42_7109fe1d16cd.slice - libcontainer container kubepods-besteffort-pod312ba18f_ce14_4faf_8d42_7109fe1d16cd.slice. Feb 13 15:32:52.329306 containerd[1492]: time="2025-02-13T15:32:52.329233766Z" level=info msg="shim disconnected" id=47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1 namespace=k8s.io Feb 13 15:32:52.329306 containerd[1492]: time="2025-02-13T15:32:52.329291926Z" level=warning msg="cleaning up after shim disconnected" id=47ae7eb733e8f005cf355a7e9a432c6cfe8e5ae7b248715f7dec279d3e84d1e1 namespace=k8s.io Feb 13 15:32:52.329306 containerd[1492]: time="2025-02-13T15:32:52.329304126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:32:52.349002 containerd[1492]: time="2025-02-13T15:32:52.348937554Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:32:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:32:52.357887 kubelet[2821]: I0213 15:32:52.356038 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4tjk\" (UniqueName: \"kubernetes.io/projected/945269f2-3dde-4aed-82a0-7f736010a34e-kube-api-access-z4tjk\") pod \"calico-apiserver-78f7c5565-fnfv9\" (UID: \"945269f2-3dde-4aed-82a0-7f736010a34e\") " pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:52.357887 kubelet[2821]: I0213 15:32:52.356116 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13d5c8c3-cbc0-413c-8112-1a04a642e871-config-volume\") pod \"coredns-76f75df574-84sk8\" (UID: \"13d5c8c3-cbc0-413c-8112-1a04a642e871\") " pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:52.357887 kubelet[2821]: I0213 15:32:52.356155 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/945269f2-3dde-4aed-82a0-7f736010a34e-calico-apiserver-certs\") pod \"calico-apiserver-78f7c5565-fnfv9\" (UID: \"945269f2-3dde-4aed-82a0-7f736010a34e\") " pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:52.357887 kubelet[2821]: I0213 15:32:52.356198 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jvzj\" (UniqueName: \"kubernetes.io/projected/312ba18f-ce14-4faf-8d42-7109fe1d16cd-kube-api-access-6jvzj\") pod \"calico-kube-controllers-7b47d5c589-xd5hs\" (UID: \"312ba18f-ce14-4faf-8d42-7109fe1d16cd\") " pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:52.357887 kubelet[2821]: I0213 15:32:52.356242 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfxkt\" (UniqueName: \"kubernetes.io/projected/13d5c8c3-cbc0-413c-8112-1a04a642e871-kube-api-access-dfxkt\") pod \"coredns-76f75df574-84sk8\" (UID: \"13d5c8c3-cbc0-413c-8112-1a04a642e871\") " pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:52.358188 kubelet[2821]: I0213 15:32:52.356276 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6q5zz\" (UniqueName: \"kubernetes.io/projected/42f57691-268d-46e2-b88f-eb306fac4b02-kube-api-access-6q5zz\") pod \"calico-apiserver-78f7c5565-5c4qp\" (UID: \"42f57691-268d-46e2-b88f-eb306fac4b02\") " pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:52.358188 kubelet[2821]: I0213 15:32:52.356431 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79306bcb-17fd-459b-b782-0d95273cdb59-config-volume\") pod \"coredns-76f75df574-4x9md\" (UID: \"79306bcb-17fd-459b-b782-0d95273cdb59\") " pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:52.358188 kubelet[2821]: I0213 15:32:52.356747 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzstc\" (UniqueName: \"kubernetes.io/projected/79306bcb-17fd-459b-b782-0d95273cdb59-kube-api-access-nzstc\") pod \"coredns-76f75df574-4x9md\" (UID: \"79306bcb-17fd-459b-b782-0d95273cdb59\") " pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:52.358188 kubelet[2821]: I0213 15:32:52.356853 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42f57691-268d-46e2-b88f-eb306fac4b02-calico-apiserver-certs\") pod \"calico-apiserver-78f7c5565-5c4qp\" (UID: \"42f57691-268d-46e2-b88f-eb306fac4b02\") " pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:52.358188 kubelet[2821]: I0213 15:32:52.357027 2821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/312ba18f-ce14-4faf-8d42-7109fe1d16cd-tigera-ca-bundle\") pod \"calico-kube-controllers-7b47d5c589-xd5hs\" (UID: \"312ba18f-ce14-4faf-8d42-7109fe1d16cd\") " pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:52.432562 systemd[1]: Created slice kubepods-besteffort-pod548c309d_1177_42c0_887f_c4ea253c82f9.slice - libcontainer container kubepods-besteffort-pod548c309d_1177_42c0_887f_c4ea253c82f9.slice. 
Feb 13 15:32:52.436980 containerd[1492]: time="2025-02-13T15:32:52.436601775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:52.568828 containerd[1492]: time="2025-02-13T15:32:52.568711570Z" level=error msg="Failed to destroy network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.570879 containerd[1492]: time="2025-02-13T15:32:52.569272254Z" level=error msg="encountered an error cleaning up failed sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.570879 containerd[1492]: time="2025-02-13T15:32:52.569490616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.570944 kubelet[2821]: E0213 15:32:52.569835 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.570944 kubelet[2821]: E0213 15:32:52.569894 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:52.570944 kubelet[2821]: E0213 15:32:52.569917 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:52.571158 kubelet[2821]: E0213 15:32:52.570005 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:52.579998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a-shm.mount: Deactivated successfully. Feb 13 15:32:52.587634 kubelet[2821]: I0213 15:32:52.587003 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a" Feb 13 15:32:52.589186 containerd[1492]: time="2025-02-13T15:32:52.587999476Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:52.589186 containerd[1492]: time="2025-02-13T15:32:52.588193357Z" level=info msg="Ensure that sandbox d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a in task-service has been cleanup successfully" Feb 13 15:32:52.593177 systemd[1]: run-netns-cni\x2dfab95904\x2da126\x2d14d8\x2dc330\x2d1e72b5db7877.mount: Deactivated successfully. Feb 13 15:32:52.595241 containerd[1492]: time="2025-02-13T15:32:52.595099769Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:52.595614 containerd[1492]: time="2025-02-13T15:32:52.595422931Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:52.597437 containerd[1492]: time="2025-02-13T15:32:52.596934823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:1,}" Feb 13 15:32:52.604531 containerd[1492]: time="2025-02-13T15:32:52.603583753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:32:52.640877 containerd[1492]: time="2025-02-13T15:32:52.640821314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:0,}" Feb 13 15:32:52.719513 containerd[1492]: time="2025-02-13T15:32:52.718442578Z" level=error msg="Failed to destroy network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.719513 containerd[1492]: time="2025-02-13T15:32:52.719108263Z" level=error msg="encountered an error cleaning up failed sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.719513 containerd[1492]: time="2025-02-13T15:32:52.719183264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.720600 kubelet[2821]: E0213 15:32:52.719545 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.720600 kubelet[2821]: E0213 15:32:52.719609 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:52.720600 kubelet[2821]: E0213 15:32:52.719632 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:52.720708 kubelet[2821]: E0213 15:32:52.719703 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:52.733863 containerd[1492]: time="2025-02-13T15:32:52.733813214Z" level=error msg="Failed to destroy network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.734863 containerd[1492]: time="2025-02-13T15:32:52.734742861Z" level=error msg="encountered an error cleaning up failed sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.734863 containerd[1492]: time="2025-02-13T15:32:52.734818382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.735318 kubelet[2821]: E0213 15:32:52.735256 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:52.735318 kubelet[2821]: E0213 15:32:52.735320 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:52.735494 kubelet[2821]: E0213 15:32:52.735342 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:52.735566 kubelet[2821]: E0213 15:32:52.735530 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:53.462135 kubelet[2821]: E0213 15:32:53.461301 2821 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:32:53.462135 kubelet[2821]: E0213 15:32:53.461543 2821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/79306bcb-17fd-459b-b782-0d95273cdb59-config-volume podName:79306bcb-17fd-459b-b782-0d95273cdb59 nodeName:}" failed. No retries permitted until 2025-02-13 15:32:53.961500903 +0000 UTC m=+34.694416714 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/79306bcb-17fd-459b-b782-0d95273cdb59-config-volume") pod "coredns-76f75df574-4x9md" (UID: "79306bcb-17fd-459b-b782-0d95273cdb59") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:32:53.464577 kubelet[2821]: E0213 15:32:53.464547 2821 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:32:53.464712 kubelet[2821]: E0213 15:32:53.464642 2821 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/13d5c8c3-cbc0-413c-8112-1a04a642e871-config-volume podName:13d5c8c3-cbc0-413c-8112-1a04a642e871 nodeName:}" failed. No retries permitted until 2025-02-13 15:32:53.964620566 +0000 UTC m=+34.697536337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/13d5c8c3-cbc0-413c-8112-1a04a642e871-config-volume") pod "coredns-76f75df574-84sk8" (UID: "13d5c8c3-cbc0-413c-8112-1a04a642e871") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:32:53.512524 containerd[1492]: time="2025-02-13T15:32:53.511710398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:32:53.523184 containerd[1492]: time="2025-02-13T15:32:53.522801801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:32:53.585898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0-shm.mount: Deactivated successfully. Feb 13 15:32:53.586265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2-shm.mount: Deactivated successfully. Feb 13 15:32:53.602893 kubelet[2821]: I0213 15:32:53.602778 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0" Feb 13 15:32:53.605022 containerd[1492]: time="2025-02-13T15:32:53.604910933Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:53.607081 containerd[1492]: time="2025-02-13T15:32:53.606032902Z" level=info msg="Ensure that sandbox 935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0 in task-service has been cleanup successfully" Feb 13 15:32:53.609126 containerd[1492]: time="2025-02-13T15:32:53.608704201Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:53.609126 containerd[1492]: time="2025-02-13T15:32:53.608736402Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:53.610172 systemd[1]: run-netns-cni\x2d1cd5a0dd\x2de8ea\x2d262a\x2d695f\x2d77ffc5186525.mount: Deactivated successfully. 
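[Editor's note] The MountVolume.SetUp failures above follow from the forbidden ConfigMap list/watch earlier: the kubelet's configmap cache never syncs, the mount attempt gives up with "timed out waiting for the condition", and nestedpendingoperations schedules another try 500ms later. A loose sketch of that wait-with-timeout-then-retry pattern using only the standard library; the real logic lives in kubelet's volume manager and apimachinery's wait helpers:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errTimedOut = errors.New("timed out waiting for the condition")

// waitForCacheSync polls a condition until it succeeds or the timeout expires,
// loosely mimicking the behaviour behind "failed to sync configmap cache".
func waitForCacheSync(synced func() bool, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if synced() {
			return nil
		}
		time.Sleep(interval)
	}
	return errTimedOut
}

func main() {
	// The cache cannot sync while the ConfigMap list is forbidden (see the
	// reflector errors above), so this times out and the caller retries after
	// a fixed durationBeforeRetry (500ms in the log).
	err := waitForCacheSync(func() bool { return false }, 100*time.Millisecond, time.Second)
	if err != nil {
		fmt.Println("failed to sync configmap cache:", err)
		fmt.Println("no retries permitted until", time.Now().Add(500*time.Millisecond).Format(time.RFC3339))
	}
}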
Feb 13 15:32:53.611849 containerd[1492]: time="2025-02-13T15:32:53.611681544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:1,}" Feb 13 15:32:53.613750 kubelet[2821]: I0213 15:32:53.613679 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2" Feb 13 15:32:53.615733 containerd[1492]: time="2025-02-13T15:32:53.615696454Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:53.616688 containerd[1492]: time="2025-02-13T15:32:53.616454939Z" level=info msg="Ensure that sandbox 7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2 in task-service has been cleanup successfully" Feb 13 15:32:53.618768 containerd[1492]: time="2025-02-13T15:32:53.618535915Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:53.619535 containerd[1492]: time="2025-02-13T15:32:53.619447642Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:53.619841 systemd[1]: run-netns-cni\x2d94679660\x2dc95c\x2dd80b\x2d302b\x2d998d00d99876.mount: Deactivated successfully. Feb 13 15:32:53.622188 containerd[1492]: time="2025-02-13T15:32:53.622151222Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:53.622755 containerd[1492]: time="2025-02-13T15:32:53.622708906Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:53.622755 containerd[1492]: time="2025-02-13T15:32:53.622728586Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:53.624171 containerd[1492]: time="2025-02-13T15:32:53.624110916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:2,}" Feb 13 15:32:53.631877 containerd[1492]: time="2025-02-13T15:32:53.631825494Z" level=error msg="Failed to destroy network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.632833 containerd[1492]: time="2025-02-13T15:32:53.632213497Z" level=error msg="encountered an error cleaning up failed sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.632833 containerd[1492]: time="2025-02-13T15:32:53.632291777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.632999 kubelet[2821]: E0213 15:32:53.632653 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.632999 kubelet[2821]: E0213 15:32:53.632708 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:53.632999 kubelet[2821]: E0213 15:32:53.632728 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:53.633094 kubelet[2821]: E0213 15:32:53.632784 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" Feb 13 15:32:53.635972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af-shm.mount: Deactivated successfully. 
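Every RunPodSandbox failure in this stretch has the same underlying error from the CNI plugin: stat /var/lib/calico/nodename: no such file or directory, together with the plugin's own hint to check that the calico/node container is running and has mounted /var/lib/calico/. A minimal sketch of acting on that hint, assuming kubectl access from the affected node and the conventional calico-node DaemonSet label k8s-app=calico-node in the calico-system namespace (the namespace and label are common defaults, not values taken from this log):

```python
# Illustrative sketch only: follow the hint in the CNI error above by checking
# (a) whether /var/lib/calico/nodename exists on this node and
# (b) the state of the calico-node pods. Namespace and label are assumed defaults.
import os
import subprocess

NODENAME_FILE = "/var/lib/calico/nodename"

def nodename_file_present() -> bool:
    """calico-node writes this file once it has started successfully on the node."""
    return os.path.exists(NODENAME_FILE)

def calico_node_pods() -> str:
    """List calico-node pods; assumes the conventional namespace and label."""
    result = subprocess.run(
        ["kubectl", "get", "pods", "-n", "calico-system",
         "-l", "k8s-app=calico-node", "-o", "wide"],
        capture_output=True,
        text=True,
    )
    return result.stdout or result.stderr

if __name__ == "__main__":
    print(f"{NODENAME_FILE} present:", nodename_file_present())
    print(calico_node_pods())
```

In this trace the nodename file is simply missing, which is consistent with the calico-node pod not yet having come up on this node.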
Feb 13 15:32:53.683890 containerd[1492]: time="2025-02-13T15:32:53.683833202Z" level=error msg="Failed to destroy network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.684981 containerd[1492]: time="2025-02-13T15:32:53.684914610Z" level=error msg="encountered an error cleaning up failed sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.685141 containerd[1492]: time="2025-02-13T15:32:53.685004411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.685579 kubelet[2821]: E0213 15:32:53.685248 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.685658 kubelet[2821]: E0213 15:32:53.685625 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:53.685690 kubelet[2821]: E0213 15:32:53.685667 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:53.685770 kubelet[2821]: E0213 15:32:53.685727 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" Feb 13 15:32:53.726292 containerd[1492]: time="2025-02-13T15:32:53.726174278Z" level=error msg="Failed to destroy network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.729548 containerd[1492]: time="2025-02-13T15:32:53.729499023Z" level=error msg="encountered an error cleaning up failed sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.729863 containerd[1492]: time="2025-02-13T15:32:53.729645144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.730407 kubelet[2821]: E0213 15:32:53.729925 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.730407 kubelet[2821]: E0213 15:32:53.729991 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:53.730407 kubelet[2821]: E0213 15:32:53.730014 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:53.730596 kubelet[2821]: E0213 15:32:53.730069 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:53.737065 containerd[1492]: time="2025-02-13T15:32:53.737002599Z" level=error msg="Failed to destroy network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.737599 containerd[1492]: time="2025-02-13T15:32:53.737555003Z" level=error msg="encountered an error cleaning up failed sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.737714 containerd[1492]: time="2025-02-13T15:32:53.737662844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.738397 kubelet[2821]: E0213 15:32:53.738049 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:53.738397 kubelet[2821]: E0213 15:32:53.738108 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:53.738397 kubelet[2821]: E0213 15:32:53.738145 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:53.738681 kubelet[2821]: E0213 15:32:53.738215 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:54.082332 containerd[1492]: time="2025-02-13T15:32:54.082222008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:54.097106 containerd[1492]: time="2025-02-13T15:32:54.097053918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:0,}" Feb 13 15:32:54.223406 containerd[1492]: time="2025-02-13T15:32:54.223188210Z" level=error msg="Failed to destroy network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.224197 containerd[1492]: time="2025-02-13T15:32:54.224162217Z" level=error msg="encountered an error cleaning up failed sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.224705 containerd[1492]: time="2025-02-13T15:32:54.224675701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.225270 kubelet[2821]: E0213 15:32:54.225219 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.225388 kubelet[2821]: E0213 15:32:54.225286 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:54.225388 kubelet[2821]: E0213 15:32:54.225308 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:54.225675 kubelet[2821]: E0213 15:32:54.225548 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-84sk8" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" Feb 13 15:32:54.254844 containerd[1492]: time="2025-02-13T15:32:54.254667923Z" level=error msg="Failed to destroy network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.255524 containerd[1492]: time="2025-02-13T15:32:54.255262087Z" level=error msg="encountered an error cleaning up failed sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.255524 containerd[1492]: time="2025-02-13T15:32:54.255339048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.255723 kubelet[2821]: E0213 15:32:54.255694 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.255778 kubelet[2821]: E0213 15:32:54.255759 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:54.255805 kubelet[2821]: E0213 15:32:54.255780 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:54.255854 kubelet[2821]: E0213 15:32:54.255838 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4x9md" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" Feb 13 15:32:54.578624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e-shm.mount: Deactivated successfully. Feb 13 15:32:54.578725 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba-shm.mount: Deactivated successfully. Feb 13 15:32:54.618585 kubelet[2821]: I0213 15:32:54.618553 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba" Feb 13 15:32:54.621936 containerd[1492]: time="2025-02-13T15:32:54.621837596Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:32:54.626734 containerd[1492]: time="2025-02-13T15:32:54.622025597Z" level=info msg="Ensure that sandbox 3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba in task-service has been cleanup successfully" Feb 13 15:32:54.626734 containerd[1492]: time="2025-02-13T15:32:54.626454150Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:32:54.626734 containerd[1492]: time="2025-02-13T15:32:54.626508270Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:32:54.628204 containerd[1492]: time="2025-02-13T15:32:54.627941481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:32:54.628715 systemd[1]: run-netns-cni\x2dc107bad2\x2d1ee9\x2d49ae\x2d4daf\x2d0641319fbf1e.mount: Deactivated successfully. 
Feb 13 15:32:54.635392 kubelet[2821]: I0213 15:32:54.633609 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139" Feb 13 15:32:54.636151 containerd[1492]: time="2025-02-13T15:32:54.636116661Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:32:54.638085 containerd[1492]: time="2025-02-13T15:32:54.637840994Z" level=info msg="Ensure that sandbox c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139 in task-service has been cleanup successfully" Feb 13 15:32:54.638280 containerd[1492]: time="2025-02-13T15:32:54.638255477Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:32:54.638611 containerd[1492]: time="2025-02-13T15:32:54.638342998Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:32:54.644367 containerd[1492]: time="2025-02-13T15:32:54.644295402Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:54.645099 systemd[1]: run-netns-cni\x2d8a9a12dc\x2dc9f6\x2d9f46\x2d7003\x2dcd70617f6353.mount: Deactivated successfully. Feb 13 15:32:54.645640 containerd[1492]: time="2025-02-13T15:32:54.645279089Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:54.645771 containerd[1492]: time="2025-02-13T15:32:54.645336729Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:54.647884 containerd[1492]: time="2025-02-13T15:32:54.647676147Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:54.647884 containerd[1492]: time="2025-02-13T15:32:54.647797148Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:54.647884 containerd[1492]: time="2025-02-13T15:32:54.647807388Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:54.649164 kubelet[2821]: I0213 15:32:54.649106 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d" Feb 13 15:32:54.651034 containerd[1492]: time="2025-02-13T15:32:54.651000691Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:32:54.651673 containerd[1492]: time="2025-02-13T15:32:54.651407614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:3,}" Feb 13 15:32:54.652191 containerd[1492]: time="2025-02-13T15:32:54.652167540Z" level=info msg="Ensure that sandbox bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d in task-service has been cleanup successfully" Feb 13 15:32:54.652885 containerd[1492]: time="2025-02-13T15:32:54.652862825Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:32:54.653436 containerd[1492]: time="2025-02-13T15:32:54.653280828Z" level=info msg="StopPodSandbox for 
\"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:32:54.658019 containerd[1492]: time="2025-02-13T15:32:54.657975943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:1,}" Feb 13 15:32:54.658154 systemd[1]: run-netns-cni\x2df9c4722a\x2d27f6\x2d483f\x2d5596\x2debe4e3446dea.mount: Deactivated successfully. Feb 13 15:32:54.661014 kubelet[2821]: I0213 15:32:54.660978 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af" Feb 13 15:32:54.672808 containerd[1492]: time="2025-02-13T15:32:54.672767372Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:32:54.674842 containerd[1492]: time="2025-02-13T15:32:54.674773107Z" level=info msg="Ensure that sandbox 62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af in task-service has been cleanup successfully" Feb 13 15:32:54.679449 containerd[1492]: time="2025-02-13T15:32:54.678953218Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:32:54.679851 containerd[1492]: time="2025-02-13T15:32:54.679829504Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:32:54.680841 kubelet[2821]: I0213 15:32:54.680816 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943" Feb 13 15:32:54.682241 containerd[1492]: time="2025-02-13T15:32:54.681546877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:32:54.684862 containerd[1492]: time="2025-02-13T15:32:54.684827581Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:32:54.685824 containerd[1492]: time="2025-02-13T15:32:54.685431706Z" level=info msg="Ensure that sandbox db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943 in task-service has been cleanup successfully" Feb 13 15:32:54.691265 containerd[1492]: time="2025-02-13T15:32:54.691230628Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:32:54.691411 kubelet[2821]: I0213 15:32:54.691329 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e" Feb 13 15:32:54.692140 containerd[1492]: time="2025-02-13T15:32:54.691755992Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:32:54.695203 containerd[1492]: time="2025-02-13T15:32:54.695156097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:1,}" Feb 13 15:32:54.695741 containerd[1492]: time="2025-02-13T15:32:54.695623101Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:32:54.696233 containerd[1492]: time="2025-02-13T15:32:54.696211265Z" level=info msg="Ensure that sandbox 
1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e in task-service has been cleanup successfully" Feb 13 15:32:54.698046 containerd[1492]: time="2025-02-13T15:32:54.697996718Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:32:54.698638 containerd[1492]: time="2025-02-13T15:32:54.698360641Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:32:54.700104 containerd[1492]: time="2025-02-13T15:32:54.699985253Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:54.700805 containerd[1492]: time="2025-02-13T15:32:54.700779299Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:54.700912 containerd[1492]: time="2025-02-13T15:32:54.700897220Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:54.701636 containerd[1492]: time="2025-02-13T15:32:54.701591745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:2,}" Feb 13 15:32:54.794291 containerd[1492]: time="2025-02-13T15:32:54.794159229Z" level=error msg="Failed to destroy network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.799365 containerd[1492]: time="2025-02-13T15:32:54.799133746Z" level=error msg="encountered an error cleaning up failed sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.799365 containerd[1492]: time="2025-02-13T15:32:54.799235547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.799606 kubelet[2821]: E0213 15:32:54.799556 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.799655 kubelet[2821]: E0213 15:32:54.799623 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:54.799655 kubelet[2821]: E0213 15:32:54.799644 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:54.799714 kubelet[2821]: E0213 15:32:54.799694 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" Feb 13 15:32:54.882742 containerd[1492]: time="2025-02-13T15:32:54.881752276Z" level=error msg="Failed to destroy network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.886554 containerd[1492]: time="2025-02-13T15:32:54.886256390Z" level=error msg="encountered an error cleaning up failed sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.886988 containerd[1492]: time="2025-02-13T15:32:54.886893474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.888652 kubelet[2821]: E0213 15:32:54.888586 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.888652 kubelet[2821]: E0213 15:32:54.888657 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:54.888956 kubelet[2821]: E0213 15:32:54.888679 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:54.888956 kubelet[2821]: E0213 15:32:54.888732 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:54.946884 containerd[1492]: time="2025-02-13T15:32:54.946834277Z" level=error msg="Failed to destroy network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.948933 containerd[1492]: time="2025-02-13T15:32:54.948815772Z" level=error msg="encountered an error cleaning up failed sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.948933 containerd[1492]: time="2025-02-13T15:32:54.948901932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.949420 kubelet[2821]: E0213 15:32:54.949379 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.949613 kubelet[2821]: E0213 15:32:54.949441 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:54.949613 kubelet[2821]: E0213 15:32:54.949463 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:54.949613 kubelet[2821]: E0213 15:32:54.949544 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:54.963426 containerd[1492]: time="2025-02-13T15:32:54.963260919Z" level=error msg="Failed to destroy network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.965358 containerd[1492]: time="2025-02-13T15:32:54.965264813Z" level=error msg="encountered an error cleaning up failed sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.965490 containerd[1492]: time="2025-02-13T15:32:54.965399294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.965964 kubelet[2821]: E0213 15:32:54.965737 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:32:54.965964 kubelet[2821]: E0213 15:32:54.965792 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:54.965964 kubelet[2821]: E0213 15:32:54.965825 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:54.966110 kubelet[2821]: E0213 15:32:54.965879 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-84sk8" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" Feb 13 15:32:54.977778 containerd[1492]: time="2025-02-13T15:32:54.977727745Z" level=error msg="Failed to destroy network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.979363 containerd[1492]: time="2025-02-13T15:32:54.979267437Z" level=error msg="encountered an error cleaning up failed sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.979885 containerd[1492]: time="2025-02-13T15:32:54.979636640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.980085 kubelet[2821]: E0213 15:32:54.979984 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.980085 kubelet[2821]: E0213 15:32:54.980063 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:54.980555 kubelet[2821]: E0213 15:32:54.980091 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:54.980555 kubelet[2821]: E0213 15:32:54.980152 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" Feb 13 15:32:54.997258 containerd[1492]: time="2025-02-13T15:32:54.997117689Z" level=error msg="Failed to destroy network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.997902 containerd[1492]: time="2025-02-13T15:32:54.997744093Z" level=error msg="encountered an error cleaning up failed sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.997902 containerd[1492]: time="2025-02-13T15:32:54.997839054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.998398 kubelet[2821]: E0213 15:32:54.998108 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:54.998398 kubelet[2821]: E0213 15:32:54.998165 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:54.998398 kubelet[2821]: E0213 15:32:54.998185 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:54.998545 kubelet[2821]: E0213 15:32:54.998261 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4x9md" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" Feb 13 15:32:55.577007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3-shm.mount: Deactivated successfully. Feb 13 15:32:55.577120 systemd[1]: run-netns-cni\x2da70a155f\x2dc19f\x2df092\x2d2b29\x2d23e18e5d52ab.mount: Deactivated successfully. Feb 13 15:32:55.577168 systemd[1]: run-netns-cni\x2de1eae9ce\x2de81c\x2d8fc0\x2d539d\x2d124ec5ab7f4c.mount: Deactivated successfully. Feb 13 15:32:55.577213 systemd[1]: run-netns-cni\x2d6b978155\x2d2291\x2ddb22\x2d28ee\x2d6631ee96f60d.mount: Deactivated successfully. 
Feb 13 15:32:55.696376 kubelet[2821]: I0213 15:32:55.696151 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866" Feb 13 15:32:55.700695 containerd[1492]: time="2025-02-13T15:32:55.699717992Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:32:55.700695 containerd[1492]: time="2025-02-13T15:32:55.700173875Z" level=info msg="Ensure that sandbox 417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866 in task-service has been cleanup successfully" Feb 13 15:32:55.703788 containerd[1492]: time="2025-02-13T15:32:55.702798534Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:32:55.703788 containerd[1492]: time="2025-02-13T15:32:55.702832414Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:32:55.704156 containerd[1492]: time="2025-02-13T15:32:55.704040143Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:32:55.704156 containerd[1492]: time="2025-02-13T15:32:55.704148984Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:32:55.704221 containerd[1492]: time="2025-02-13T15:32:55.704159864Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:32:55.705973 systemd[1]: run-netns-cni\x2d64dc86ec\x2d703e\x2d28ce\x2d7cb0\x2d0c31602b2da2.mount: Deactivated successfully. Feb 13 15:32:55.707626 kubelet[2821]: I0213 15:32:55.706754 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813" Feb 13 15:32:55.709966 containerd[1492]: time="2025-02-13T15:32:55.709344382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:2,}" Feb 13 15:32:55.712364 containerd[1492]: time="2025-02-13T15:32:55.711964681Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:32:55.712364 containerd[1492]: time="2025-02-13T15:32:55.712197603Z" level=info msg="Ensure that sandbox a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813 in task-service has been cleanup successfully" Feb 13 15:32:55.715767 containerd[1492]: time="2025-02-13T15:32:55.714835342Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:32:55.715767 containerd[1492]: time="2025-02-13T15:32:55.714919903Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:32:55.716547 systemd[1]: run-netns-cni\x2dcae3ff36\x2d390d\x2d3960\x2d742f\x2dd35b0ddd52c6.mount: Deactivated successfully. 
Feb 13 15:32:55.719768 containerd[1492]: time="2025-02-13T15:32:55.719445656Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:32:55.719768 containerd[1492]: time="2025-02-13T15:32:55.719571857Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:32:55.719768 containerd[1492]: time="2025-02-13T15:32:55.719582257Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:32:55.722864 containerd[1492]: time="2025-02-13T15:32:55.722822041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:32:55.723607 kubelet[2821]: I0213 15:32:55.723514 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f" Feb 13 15:32:55.725389 containerd[1492]: time="2025-02-13T15:32:55.725255179Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:32:55.726449 containerd[1492]: time="2025-02-13T15:32:55.726405707Z" level=info msg="Ensure that sandbox a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f in task-service has been cleanup successfully" Feb 13 15:32:55.727068 containerd[1492]: time="2025-02-13T15:32:55.726781750Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:32:55.727202 containerd[1492]: time="2025-02-13T15:32:55.727181193Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:32:55.729176 systemd[1]: run-netns-cni\x2d2499e771\x2df693\x2dffed\x2d594c\x2d44f27b75d6fa.mount: Deactivated successfully. 
Feb 13 15:32:55.731162 containerd[1492]: time="2025-02-13T15:32:55.730798419Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:32:55.731162 containerd[1492]: time="2025-02-13T15:32:55.730944540Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:32:55.731162 containerd[1492]: time="2025-02-13T15:32:55.730955980Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:32:55.735881 containerd[1492]: time="2025-02-13T15:32:55.735692855Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:55.736881 containerd[1492]: time="2025-02-13T15:32:55.736666222Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:55.736881 containerd[1492]: time="2025-02-13T15:32:55.736702662Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:55.741034 kubelet[2821]: I0213 15:32:55.740766 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc" Feb 13 15:32:55.746451 containerd[1492]: time="2025-02-13T15:32:55.745284325Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:32:55.747017 containerd[1492]: time="2025-02-13T15:32:55.746605055Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:55.747017 containerd[1492]: time="2025-02-13T15:32:55.746721616Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:55.747017 containerd[1492]: time="2025-02-13T15:32:55.746731536Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:55.747646 containerd[1492]: time="2025-02-13T15:32:55.746809856Z" level=info msg="Ensure that sandbox ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc in task-service has been cleanup successfully" Feb 13 15:32:55.749006 containerd[1492]: time="2025-02-13T15:32:55.748962912Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:32:55.749133 containerd[1492]: time="2025-02-13T15:32:55.749117153Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:32:55.749895 containerd[1492]: time="2025-02-13T15:32:55.749865599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:4,}" Feb 13 15:32:55.752014 systemd[1]: run-netns-cni\x2d7944f3b0\x2d6cde\x2dbad6\x2ddcce\x2d99b1c0311e0b.mount: Deactivated successfully. 
Feb 13 15:32:55.756275 containerd[1492]: time="2025-02-13T15:32:55.756135525Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:32:55.757244 containerd[1492]: time="2025-02-13T15:32:55.757115332Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:32:55.757244 containerd[1492]: time="2025-02-13T15:32:55.757143772Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:32:55.758735 containerd[1492]: time="2025-02-13T15:32:55.758445661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:2,}" Feb 13 15:32:55.759832 kubelet[2821]: I0213 15:32:55.759803 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c" Feb 13 15:32:55.763663 containerd[1492]: time="2025-02-13T15:32:55.762492171Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:32:55.763663 containerd[1492]: time="2025-02-13T15:32:55.763089215Z" level=info msg="Ensure that sandbox 03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c in task-service has been cleanup successfully" Feb 13 15:32:55.764379 containerd[1492]: time="2025-02-13T15:32:55.764158023Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:32:55.764379 containerd[1492]: time="2025-02-13T15:32:55.764182303Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:32:55.767125 containerd[1492]: time="2025-02-13T15:32:55.766839203Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:32:55.767125 containerd[1492]: time="2025-02-13T15:32:55.766966324Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:32:55.767125 containerd[1492]: time="2025-02-13T15:32:55.766976324Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:32:55.769426 containerd[1492]: time="2025-02-13T15:32:55.769284901Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:55.769975 containerd[1492]: time="2025-02-13T15:32:55.769630183Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:55.769975 containerd[1492]: time="2025-02-13T15:32:55.769651904Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:55.770092 kubelet[2821]: I0213 15:32:55.769779 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3" Feb 13 15:32:55.771066 containerd[1492]: time="2025-02-13T15:32:55.770731231Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:32:55.772802 containerd[1492]: time="2025-02-13T15:32:55.772677966Z" level=info 
msg="Ensure that sandbox 708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3 in task-service has been cleanup successfully" Feb 13 15:32:55.773561 containerd[1492]: time="2025-02-13T15:32:55.773419971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:3,}" Feb 13 15:32:55.775365 containerd[1492]: time="2025-02-13T15:32:55.775030303Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:32:55.775365 containerd[1492]: time="2025-02-13T15:32:55.775066863Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:32:55.775958 containerd[1492]: time="2025-02-13T15:32:55.775843669Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:32:55.776731 containerd[1492]: time="2025-02-13T15:32:55.776584714Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:32:55.776731 containerd[1492]: time="2025-02-13T15:32:55.776625075Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:32:55.779235 containerd[1492]: time="2025-02-13T15:32:55.779184253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:32:56.000650 containerd[1492]: time="2025-02-13T15:32:56.000588674Z" level=error msg="Failed to destroy network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.002218 containerd[1492]: time="2025-02-13T15:32:56.002158845Z" level=error msg="encountered an error cleaning up failed sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.003303 containerd[1492]: time="2025-02-13T15:32:56.003167773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.003688 kubelet[2821]: E0213 15:32:56.003561 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.003757 kubelet[2821]: E0213 
15:32:56.003724 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:56.003899 kubelet[2821]: E0213 15:32:56.003749 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:56.003962 kubelet[2821]: E0213 15:32:56.003952 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" Feb 13 15:32:56.011955 containerd[1492]: time="2025-02-13T15:32:56.011860156Z" level=error msg="Failed to destroy network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.013914 containerd[1492]: time="2025-02-13T15:32:56.013866930Z" level=error msg="encountered an error cleaning up failed sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.015134 containerd[1492]: time="2025-02-13T15:32:56.015091859Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.016757 kubelet[2821]: E0213 15:32:56.016725 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.016995 kubelet[2821]: E0213 15:32:56.016782 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:56.016995 kubelet[2821]: E0213 15:32:56.016802 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:56.016995 kubelet[2821]: E0213 15:32:56.016860 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-84sk8" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" Feb 13 15:32:56.056569 containerd[1492]: time="2025-02-13T15:32:56.056344878Z" level=error msg="Failed to destroy network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.057120 containerd[1492]: time="2025-02-13T15:32:56.056942403Z" level=error msg="encountered an error cleaning up failed sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.057120 containerd[1492]: time="2025-02-13T15:32:56.057011643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.057416 kubelet[2821]: E0213 15:32:56.057279 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.057416 kubelet[2821]: E0213 15:32:56.057370 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:56.057416 kubelet[2821]: E0213 15:32:56.057396 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:56.057787 kubelet[2821]: E0213 15:32:56.057448 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:56.064819 containerd[1492]: time="2025-02-13T15:32:56.064665379Z" level=error msg="Failed to destroy network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.065415 containerd[1492]: time="2025-02-13T15:32:56.065247263Z" level=error msg="encountered an error cleaning up failed sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.065415 containerd[1492]: time="2025-02-13T15:32:56.065319743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.067651 kubelet[2821]: E0213 15:32:56.067579 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.067651 kubelet[2821]: E0213 15:32:56.067643 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:56.067886 kubelet[2821]: E0213 15:32:56.067664 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:56.069305 kubelet[2821]: E0213 15:32:56.067872 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" Feb 13 15:32:56.080081 containerd[1492]: time="2025-02-13T15:32:56.079948089Z" level=error msg="Failed to destroy network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.080715 containerd[1492]: time="2025-02-13T15:32:56.080667095Z" level=error msg="encountered an error cleaning up failed sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.080953 containerd[1492]: time="2025-02-13T15:32:56.080919056Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.081316 kubelet[2821]: E0213 15:32:56.081293 2821 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.081429 kubelet[2821]: E0213 15:32:56.081344 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:56.081429 kubelet[2821]: E0213 15:32:56.081381 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:56.081554 kubelet[2821]: E0213 15:32:56.081438 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:56.090726 containerd[1492]: time="2025-02-13T15:32:56.090592447Z" level=error msg="Failed to destroy network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.091210 containerd[1492]: time="2025-02-13T15:32:56.091037090Z" level=error msg="encountered an error cleaning up failed sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.091210 containerd[1492]: time="2025-02-13T15:32:56.091110650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.091507 kubelet[2821]: E0213 15:32:56.091409 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:56.091507 kubelet[2821]: E0213 15:32:56.091497 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:56.091611 kubelet[2821]: E0213 15:32:56.091533 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:56.091611 kubelet[2821]: E0213 15:32:56.091602 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4x9md" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" Feb 13 15:32:56.577872 systemd[1]: run-netns-cni\x2db639264d\x2d5458\x2df50d\x2d8037\x2d5c20b87b3652.mount: Deactivated successfully. Feb 13 15:32:56.577982 systemd[1]: run-netns-cni\x2dec1d3d93\x2dde32\x2dfa5f\x2da9fd\x2d87325cf572e5.mount: Deactivated successfully. Feb 13 15:32:56.781424 kubelet[2821]: I0213 15:32:56.781309 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2" Feb 13 15:32:56.786102 containerd[1492]: time="2025-02-13T15:32:56.784937322Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:32:56.786102 containerd[1492]: time="2025-02-13T15:32:56.785175244Z" level=info msg="Ensure that sandbox e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2 in task-service has been cleanup successfully" Feb 13 15:32:56.789463 systemd[1]: run-netns-cni\x2d682a73c8\x2defd9\x2d641b\x2d8f22\x2d527e239a0329.mount: Deactivated successfully. 
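From 15:32:56 onward the kubelet repeats the same stop/teardown/retry cycle for every affected pod (coredns-76f75df574-4x9md and -84sk8, csi-node-driver-6lfhr, calico-kube-controllers-7b47d5c589-xd5hs, and both calico-apiserver-78f7c5565 pods), each retry incrementing the sandbox Attempt counter and failing with the identical nodename error. The sketch below is a hypothetical triage helper in Go, assuming journal text in the format shown above on stdin; it tallies the "Failed to create sandbox for pod" lines per pod= field so the retry churn is easier to see at a glance.

// count_sandbox_failures.go - hypothetical triage sketch, not Kubernetes tooling.
// Reads journal text like the excerpt above from stdin and tallies
// "Failed to create sandbox for pod" log lines per pod="..." field.
// Counts are per matching log line, so one failed attempt may contribute several lines.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// pod="kube-system/coredns-76f75df574-4x9md" as emitted by kubelet above.
	podRe := regexp.MustCompile(`Failed to create sandbox for pod.*?pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := podRe.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%-60s %d\n", pod, n)
	}
}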
Feb 13 15:32:56.794012 containerd[1492]: time="2025-02-13T15:32:56.793113781Z" level=info msg="TearDown network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" successfully" Feb 13 15:32:56.794012 containerd[1492]: time="2025-02-13T15:32:56.793157662Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" returns successfully" Feb 13 15:32:56.797123 kubelet[2821]: I0213 15:32:56.797008 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72" Feb 13 15:32:56.801143 containerd[1492]: time="2025-02-13T15:32:56.800443915Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801265960Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801298921Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801663203Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801775564Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801840525Z" level=info msg="Ensure that sandbox e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72 in task-service has been cleanup successfully" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801866525Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:32:56.804534 containerd[1492]: time="2025-02-13T15:32:56.801877285Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:32:56.804337 systemd[1]: run-netns-cni\x2d15693be5\x2d3a26\x2d748d\x2ddcd5\x2dfc0ef39defcf.mount: Deactivated successfully. 
Feb 13 15:32:56.805402 containerd[1492]: time="2025-02-13T15:32:56.805372310Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:56.805817 containerd[1492]: time="2025-02-13T15:32:56.805796313Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:56.805960 containerd[1492]: time="2025-02-13T15:32:56.805885114Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:56.806768 containerd[1492]: time="2025-02-13T15:32:56.806260917Z" level=info msg="TearDown network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" successfully" Feb 13 15:32:56.808153 containerd[1492]: time="2025-02-13T15:32:56.807771888Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" returns successfully" Feb 13 15:32:56.809029 containerd[1492]: time="2025-02-13T15:32:56.808777455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:4,}" Feb 13 15:32:56.809158 containerd[1492]: time="2025-02-13T15:32:56.809138978Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:32:56.810737 containerd[1492]: time="2025-02-13T15:32:56.810697309Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:32:56.810959 containerd[1492]: time="2025-02-13T15:32:56.810792230Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:32:56.811432 containerd[1492]: time="2025-02-13T15:32:56.811392714Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:32:56.811678 containerd[1492]: time="2025-02-13T15:32:56.811601075Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:32:56.811678 containerd[1492]: time="2025-02-13T15:32:56.811615796Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:32:56.812768 containerd[1492]: time="2025-02-13T15:32:56.812631483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:32:56.815149 kubelet[2821]: I0213 15:32:56.815106 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f" Feb 13 15:32:56.817812 containerd[1492]: time="2025-02-13T15:32:56.816278229Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:32:56.817812 containerd[1492]: time="2025-02-13T15:32:56.816506031Z" level=info msg="Ensure that sandbox 2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f in task-service has been cleanup successfully" Feb 13 15:32:56.827425 kubelet[2821]: I0213 15:32:56.821972 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f" Feb 
13 15:32:56.827085 systemd[1]: run-netns-cni\x2ddb50419b\x2dc0c8\x2d251e\x2dcf05\x2d93c6fc72c427.mount: Deactivated successfully. Feb 13 15:32:56.827668 containerd[1492]: time="2025-02-13T15:32:56.825359775Z" level=info msg="TearDown network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" successfully" Feb 13 15:32:56.827668 containerd[1492]: time="2025-02-13T15:32:56.825396295Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" returns successfully" Feb 13 15:32:56.832623 containerd[1492]: time="2025-02-13T15:32:56.829905768Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:32:56.832623 containerd[1492]: time="2025-02-13T15:32:56.830117330Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:32:56.832623 containerd[1492]: time="2025-02-13T15:32:56.830135210Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:32:56.832623 containerd[1492]: time="2025-02-13T15:32:56.831740421Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.833984158Z" level=info msg="Ensure that sandbox 573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f in task-service has been cleanup successfully" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.834143559Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.834264360Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.834275520Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.835405648Z" level=info msg="TearDown network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" successfully" Feb 13 15:32:56.838856 containerd[1492]: time="2025-02-13T15:32:56.835433928Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" returns successfully" Feb 13 15:32:56.840258 containerd[1492]: time="2025-02-13T15:32:56.839781280Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:32:56.840258 containerd[1492]: time="2025-02-13T15:32:56.839904481Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:32:56.840258 containerd[1492]: time="2025-02-13T15:32:56.839914561Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:32:56.840782 systemd[1]: run-netns-cni\x2db051b084\x2d7df9\x2d515b\x2ddf25\x2d528ab30438de.mount: Deactivated successfully. 
Feb 13 15:32:56.842032 containerd[1492]: time="2025-02-13T15:32:56.841224570Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:56.842032 containerd[1492]: time="2025-02-13T15:32:56.841962896Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:32:56.842032 containerd[1492]: time="2025-02-13T15:32:56.841990296Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:56.842920 containerd[1492]: time="2025-02-13T15:32:56.842009376Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:56.842920 containerd[1492]: time="2025-02-13T15:32:56.842136017Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:32:56.842920 containerd[1492]: time="2025-02-13T15:32:56.842888542Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:32:56.843933 kubelet[2821]: I0213 15:32:56.843442 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8" Feb 13 15:32:56.844176 containerd[1492]: time="2025-02-13T15:32:56.844075351Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:56.844758 containerd[1492]: time="2025-02-13T15:32:56.844713996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:3,}" Feb 13 15:32:56.845999 containerd[1492]: time="2025-02-13T15:32:56.845467201Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:56.846145 containerd[1492]: time="2025-02-13T15:32:56.846001485Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:56.846629 containerd[1492]: time="2025-02-13T15:32:56.846596169Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:32:56.847056 containerd[1492]: time="2025-02-13T15:32:56.846873011Z" level=info msg="Ensure that sandbox 47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8 in task-service has been cleanup successfully" Feb 13 15:32:56.849963 containerd[1492]: time="2025-02-13T15:32:56.849456750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:5,}" Feb 13 15:32:56.851075 containerd[1492]: time="2025-02-13T15:32:56.850949961Z" level=info msg="TearDown network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" successfully" Feb 13 15:32:56.851075 containerd[1492]: time="2025-02-13T15:32:56.850986121Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" returns successfully" Feb 13 15:32:56.851470 kubelet[2821]: I0213 15:32:56.851432 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815" Feb 13 15:32:56.853793 
containerd[1492]: time="2025-02-13T15:32:56.853562100Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:32:56.853793 containerd[1492]: time="2025-02-13T15:32:56.853699541Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:32:56.853793 containerd[1492]: time="2025-02-13T15:32:56.853713341Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:32:56.856170 containerd[1492]: time="2025-02-13T15:32:56.856127718Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:32:56.856992 containerd[1492]: time="2025-02-13T15:32:56.856613602Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:32:56.856992 containerd[1492]: time="2025-02-13T15:32:56.856640562Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:32:56.856992 containerd[1492]: time="2025-02-13T15:32:56.856305120Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:32:56.856992 containerd[1492]: time="2025-02-13T15:32:56.856841123Z" level=info msg="Ensure that sandbox e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815 in task-service has been cleanup successfully" Feb 13 15:32:56.857472 containerd[1492]: time="2025-02-13T15:32:56.857449408Z" level=info msg="TearDown network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" successfully" Feb 13 15:32:56.857628 containerd[1492]: time="2025-02-13T15:32:56.857610529Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" returns successfully" Feb 13 15:32:56.860194 containerd[1492]: time="2025-02-13T15:32:56.860154748Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:32:56.861215 containerd[1492]: time="2025-02-13T15:32:56.860447870Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:32:56.861215 containerd[1492]: time="2025-02-13T15:32:56.860467950Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:32:56.861521 containerd[1492]: time="2025-02-13T15:32:56.861467917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:32:56.864615 containerd[1492]: time="2025-02-13T15:32:56.864576180Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:32:56.864825 containerd[1492]: time="2025-02-13T15:32:56.864810221Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:32:56.865537 containerd[1492]: time="2025-02-13T15:32:56.865500266Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:32:56.867402 containerd[1492]: time="2025-02-13T15:32:56.867316719Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:3,}" Feb 13 15:32:57.113572 containerd[1492]: time="2025-02-13T15:32:57.113102695Z" level=error msg="Failed to destroy network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.116382 containerd[1492]: time="2025-02-13T15:32:57.116304718Z" level=error msg="encountered an error cleaning up failed sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.117881 containerd[1492]: time="2025-02-13T15:32:57.117467366Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.119583 kubelet[2821]: E0213 15:32:57.118431 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.119583 kubelet[2821]: E0213 15:32:57.118503 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:57.119583 kubelet[2821]: E0213 15:32:57.118524 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:57.119781 kubelet[2821]: E0213 15:32:57.118600 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" Feb 13 15:32:57.164339 containerd[1492]: time="2025-02-13T15:32:57.163914260Z" level=error msg="Failed to destroy network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.168074 containerd[1492]: time="2025-02-13T15:32:57.167999169Z" level=error msg="encountered an error cleaning up failed sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.168227 containerd[1492]: time="2025-02-13T15:32:57.168104130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.168438 containerd[1492]: time="2025-02-13T15:32:57.168407412Z" level=error msg="Failed to destroy network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.169682 containerd[1492]: time="2025-02-13T15:32:57.169097697Z" level=error msg="encountered an error cleaning up failed sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.169682 containerd[1492]: time="2025-02-13T15:32:57.169235858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.169848 kubelet[2821]: E0213 15:32:57.169345 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.169848 kubelet[2821]: E0213 
15:32:57.169443 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:57.169848 kubelet[2821]: E0213 15:32:57.169612 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:57.170177 kubelet[2821]: E0213 15:32:57.170052 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.170177 kubelet[2821]: E0213 15:32:57.170094 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:57.170177 kubelet[2821]: E0213 15:32:57.170111 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:57.170337 kubelet[2821]: E0213 15:32:57.170150 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-84sk8" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" Feb 13 15:32:57.170773 kubelet[2821]: E0213 15:32:57.170405 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:57.185682 containerd[1492]: time="2025-02-13T15:32:57.185533255Z" level=error msg="Failed to destroy network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.188737 containerd[1492]: time="2025-02-13T15:32:57.187890672Z" level=error msg="encountered an error cleaning up failed sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.188737 containerd[1492]: time="2025-02-13T15:32:57.187977273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.188934 kubelet[2821]: E0213 15:32:57.188199 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.188934 kubelet[2821]: E0213 15:32:57.188254 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:57.188934 kubelet[2821]: E0213 15:32:57.188277 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:57.189016 kubelet[2821]: E0213 15:32:57.188329 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4x9md" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" Feb 13 15:32:57.195829 containerd[1492]: time="2025-02-13T15:32:57.195768169Z" level=error msg="Failed to destroy network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.198878 containerd[1492]: time="2025-02-13T15:32:57.198339227Z" level=error msg="encountered an error cleaning up failed sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.199210 containerd[1492]: time="2025-02-13T15:32:57.199176633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.200412 kubelet[2821]: E0213 15:32:57.199858 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.200412 kubelet[2821]: E0213 15:32:57.199924 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:57.200412 kubelet[2821]: E0213 15:32:57.199944 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:57.200626 kubelet[2821]: E0213 15:32:57.199998 
2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" Feb 13 15:32:57.209808 containerd[1492]: time="2025-02-13T15:32:57.209756349Z" level=error msg="Failed to destroy network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.211019 containerd[1492]: time="2025-02-13T15:32:57.210369634Z" level=error msg="encountered an error cleaning up failed sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.211954 containerd[1492]: time="2025-02-13T15:32:57.211048319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.212022 kubelet[2821]: E0213 15:32:57.211326 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:57.212022 kubelet[2821]: E0213 15:32:57.211447 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:57.212022 kubelet[2821]: E0213 15:32:57.211477 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:57.212206 kubelet[2821]: E0213 15:32:57.211586 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:57.576859 systemd[1]: run-netns-cni\x2d4dd4a821\x2d4091\x2db4fe\x2ddd9e\x2dedf0ba7f7154.mount: Deactivated successfully. Feb 13 15:32:57.577330 systemd[1]: run-netns-cni\x2d1a243182\x2d0776\x2de2c6\x2d9537\x2d4664793efb87.mount: Deactivated successfully. Feb 13 15:32:57.697700 kubelet[2821]: I0213 15:32:57.696764 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:32:57.862602 kubelet[2821]: I0213 15:32:57.862147 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220" Feb 13 15:32:57.865248 containerd[1492]: time="2025-02-13T15:32:57.865179660Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" Feb 13 15:32:57.868680 containerd[1492]: time="2025-02-13T15:32:57.866014626Z" level=info msg="Ensure that sandbox a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220 in task-service has been cleanup successfully" Feb 13 15:32:57.868680 containerd[1492]: time="2025-02-13T15:32:57.868450243Z" level=info msg="TearDown network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" successfully" Feb 13 15:32:57.868680 containerd[1492]: time="2025-02-13T15:32:57.868481243Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" returns successfully" Feb 13 15:32:57.868226 systemd[1]: run-netns-cni\x2de57cfb5d\x2dd18d\x2d122e\x2d1b45\x2d63ac82513393.mount: Deactivated successfully. 
Feb 13 15:32:57.870182 containerd[1492]: time="2025-02-13T15:32:57.869792853Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:32:57.870182 containerd[1492]: time="2025-02-13T15:32:57.869901494Z" level=info msg="TearDown network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" successfully" Feb 13 15:32:57.870182 containerd[1492]: time="2025-02-13T15:32:57.869910934Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" returns successfully" Feb 13 15:32:57.871790 containerd[1492]: time="2025-02-13T15:32:57.871321784Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:32:57.871790 containerd[1492]: time="2025-02-13T15:32:57.871620626Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:32:57.871790 containerd[1492]: time="2025-02-13T15:32:57.871633946Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:32:57.873056 containerd[1492]: time="2025-02-13T15:32:57.872537393Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:32:57.873056 containerd[1492]: time="2025-02-13T15:32:57.872628993Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:32:57.873056 containerd[1492]: time="2025-02-13T15:32:57.872638033Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:32:57.874711 kubelet[2821]: I0213 15:32:57.874684 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968" Feb 13 15:32:57.876185 containerd[1492]: time="2025-02-13T15:32:57.876140498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:4,}" Feb 13 15:32:57.884663 containerd[1492]: time="2025-02-13T15:32:57.884162796Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" Feb 13 15:32:57.885721 containerd[1492]: time="2025-02-13T15:32:57.885205324Z" level=info msg="Ensure that sandbox 59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968 in task-service has been cleanup successfully" Feb 13 15:32:57.889230 containerd[1492]: time="2025-02-13T15:32:57.886905016Z" level=info msg="TearDown network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" successfully" Feb 13 15:32:57.889230 containerd[1492]: time="2025-02-13T15:32:57.886942936Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" returns successfully" Feb 13 15:32:57.890688 containerd[1492]: time="2025-02-13T15:32:57.890085119Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:32:57.890688 containerd[1492]: time="2025-02-13T15:32:57.890198279Z" level=info msg="TearDown network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" successfully" Feb 13 15:32:57.890688 containerd[1492]: time="2025-02-13T15:32:57.890208800Z" level=info 
msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" returns successfully" Feb 13 15:32:57.890873 containerd[1492]: time="2025-02-13T15:32:57.890842644Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:32:57.891701 containerd[1492]: time="2025-02-13T15:32:57.890929805Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:32:57.891701 containerd[1492]: time="2025-02-13T15:32:57.890946645Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:32:57.892989 systemd[1]: run-netns-cni\x2db759e908\x2dee0d\x2de93b\x2d8091\x2d4469b29b86b8.mount: Deactivated successfully. Feb 13 15:32:57.894543 containerd[1492]: time="2025-02-13T15:32:57.893596704Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:32:57.894543 containerd[1492]: time="2025-02-13T15:32:57.893763185Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:32:57.894543 containerd[1492]: time="2025-02-13T15:32:57.893777185Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:32:57.896739 containerd[1492]: time="2025-02-13T15:32:57.896692806Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:57.896885 containerd[1492]: time="2025-02-13T15:32:57.896865447Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:57.896885 containerd[1492]: time="2025-02-13T15:32:57.896882888Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:57.897528 kubelet[2821]: I0213 15:32:57.897445 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575" Feb 13 15:32:57.900259 containerd[1492]: time="2025-02-13T15:32:57.900182191Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" Feb 13 15:32:57.900460 containerd[1492]: time="2025-02-13T15:32:57.900422473Z" level=info msg="Ensure that sandbox a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575 in task-service has been cleanup successfully" Feb 13 15:32:57.903562 containerd[1492]: time="2025-02-13T15:32:57.901797003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:5,}" Feb 13 15:32:57.903562 containerd[1492]: time="2025-02-13T15:32:57.902184806Z" level=info msg="TearDown network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" successfully" Feb 13 15:32:57.903562 containerd[1492]: time="2025-02-13T15:32:57.902199806Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" returns successfully" Feb 13 15:32:57.905724 systemd[1]: run-netns-cni\x2dab604fb1\x2d3905\x2d7c6b\x2df76e\x2dfa4119d4b566.mount: Deactivated successfully. 
Feb 13 15:32:57.908255 containerd[1492]: time="2025-02-13T15:32:57.907730125Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:32:57.908255 containerd[1492]: time="2025-02-13T15:32:57.907868646Z" level=info msg="TearDown network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" successfully" Feb 13 15:32:57.908255 containerd[1492]: time="2025-02-13T15:32:57.907879567Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" returns successfully" Feb 13 15:32:57.919857 containerd[1492]: time="2025-02-13T15:32:57.919564731Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:32:57.919857 containerd[1492]: time="2025-02-13T15:32:57.919700011Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:32:57.919857 containerd[1492]: time="2025-02-13T15:32:57.919710532Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:32:57.921938 containerd[1492]: time="2025-02-13T15:32:57.921615265Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:32:57.921938 containerd[1492]: time="2025-02-13T15:32:57.921730866Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:32:57.921938 containerd[1492]: time="2025-02-13T15:32:57.921740706Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:32:57.923229 containerd[1492]: time="2025-02-13T15:32:57.923111956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:32:57.932825 kubelet[2821]: I0213 15:32:57.932764 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3" Feb 13 15:32:57.934275 containerd[1492]: time="2025-02-13T15:32:57.933809073Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" Feb 13 15:32:57.934275 containerd[1492]: time="2025-02-13T15:32:57.934025994Z" level=info msg="Ensure that sandbox e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3 in task-service has been cleanup successfully" Feb 13 15:32:57.938314 containerd[1492]: time="2025-02-13T15:32:57.937636780Z" level=info msg="TearDown network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" successfully" Feb 13 15:32:57.939478 containerd[1492]: time="2025-02-13T15:32:57.939444673Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" returns successfully" Feb 13 15:32:57.943531 containerd[1492]: time="2025-02-13T15:32:57.943480182Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:32:57.944813 containerd[1492]: time="2025-02-13T15:32:57.944777712Z" level=info msg="TearDown network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" successfully" Feb 13 15:32:57.945014 containerd[1492]: time="2025-02-13T15:32:57.944995793Z" 
level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" returns successfully" Feb 13 15:32:57.947836 containerd[1492]: time="2025-02-13T15:32:57.947786933Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:32:57.947990 containerd[1492]: time="2025-02-13T15:32:57.947917534Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:32:57.947990 containerd[1492]: time="2025-02-13T15:32:57.947929014Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:32:57.948378 kubelet[2821]: I0213 15:32:57.948328 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4" Feb 13 15:32:57.949023 containerd[1492]: time="2025-02-13T15:32:57.948923342Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:32:57.949788 containerd[1492]: time="2025-02-13T15:32:57.949465065Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:32:57.949788 containerd[1492]: time="2025-02-13T15:32:57.949495986Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:32:57.952414 containerd[1492]: time="2025-02-13T15:32:57.952283166Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:57.953892 containerd[1492]: time="2025-02-13T15:32:57.953852777Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:57.953892 containerd[1492]: time="2025-02-13T15:32:57.953884577Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:57.954476 containerd[1492]: time="2025-02-13T15:32:57.952500047Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" Feb 13 15:32:57.954476 containerd[1492]: time="2025-02-13T15:32:57.954119419Z" level=info msg="Ensure that sandbox ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4 in task-service has been cleanup successfully" Feb 13 15:32:57.955112 containerd[1492]: time="2025-02-13T15:32:57.954883544Z" level=info msg="TearDown network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" successfully" Feb 13 15:32:57.956163 containerd[1492]: time="2025-02-13T15:32:57.956122993Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" returns successfully" Feb 13 15:32:57.958385 containerd[1492]: time="2025-02-13T15:32:57.958154808Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:32:57.959243 containerd[1492]: time="2025-02-13T15:32:57.959207295Z" level=info msg="TearDown network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" successfully" Feb 13 15:32:57.959243 containerd[1492]: time="2025-02-13T15:32:57.959235776Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" returns successfully" Feb 13 
15:32:57.959462 containerd[1492]: time="2025-02-13T15:32:57.959442817Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:57.959542 containerd[1492]: time="2025-02-13T15:32:57.959530738Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:57.959542 containerd[1492]: time="2025-02-13T15:32:57.959541898Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:57.963052 containerd[1492]: time="2025-02-13T15:32:57.963015443Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:32:57.964183 containerd[1492]: time="2025-02-13T15:32:57.963835929Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:32:57.964183 containerd[1492]: time="2025-02-13T15:32:57.963857409Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:32:57.964183 containerd[1492]: time="2025-02-13T15:32:57.963043603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:6,}" Feb 13 15:32:57.966603 containerd[1492]: time="2025-02-13T15:32:57.966563948Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:32:57.966840 containerd[1492]: time="2025-02-13T15:32:57.966823510Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:32:57.966923 containerd[1492]: time="2025-02-13T15:32:57.966906591Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:32:57.967712 containerd[1492]: time="2025-02-13T15:32:57.967681436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:4,}" Feb 13 15:32:57.970171 kubelet[2821]: I0213 15:32:57.970135 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2" Feb 13 15:32:57.973580 containerd[1492]: time="2025-02-13T15:32:57.973517438Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" Feb 13 15:32:57.973920 containerd[1492]: time="2025-02-13T15:32:57.973898801Z" level=info msg="Ensure that sandbox 6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2 in task-service has been cleanup successfully" Feb 13 15:32:57.975577 containerd[1492]: time="2025-02-13T15:32:57.975466732Z" level=info msg="TearDown network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" successfully" Feb 13 15:32:57.976028 containerd[1492]: time="2025-02-13T15:32:57.975668294Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" returns successfully" Feb 13 15:32:57.979754 containerd[1492]: time="2025-02-13T15:32:57.979708203Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:32:57.980041 containerd[1492]: 
time="2025-02-13T15:32:57.980022805Z" level=info msg="TearDown network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" successfully" Feb 13 15:32:57.980112 containerd[1492]: time="2025-02-13T15:32:57.980098486Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" returns successfully" Feb 13 15:32:57.980841 containerd[1492]: time="2025-02-13T15:32:57.980815411Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:32:57.981118 containerd[1492]: time="2025-02-13T15:32:57.981028052Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:32:57.981118 containerd[1492]: time="2025-02-13T15:32:57.981048172Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:32:57.982316 containerd[1492]: time="2025-02-13T15:32:57.982112340Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:32:57.982316 containerd[1492]: time="2025-02-13T15:32:57.982205461Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:32:57.982316 containerd[1492]: time="2025-02-13T15:32:57.982214861Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:32:57.983239 containerd[1492]: time="2025-02-13T15:32:57.983206268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:32:58.211332 containerd[1492]: time="2025-02-13T15:32:58.211106013Z" level=error msg="Failed to destroy network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.214773 containerd[1492]: time="2025-02-13T15:32:58.214636278Z" level=error msg="Failed to destroy network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.215996 containerd[1492]: time="2025-02-13T15:32:58.215798606Z" level=error msg="encountered an error cleaning up failed sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.216210 containerd[1492]: time="2025-02-13T15:32:58.216126368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 15:32:58.218328 containerd[1492]: time="2025-02-13T15:32:58.216431570Z" level=error msg="encountered an error cleaning up failed sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.218328 containerd[1492]: time="2025-02-13T15:32:58.217015095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.218328 containerd[1492]: time="2025-02-13T15:32:58.217628459Z" level=error msg="Failed to destroy network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.219550 kubelet[2821]: E0213 15:32:58.217208 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.219550 kubelet[2821]: E0213 15:32:58.217219 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.219550 kubelet[2821]: E0213 15:32:58.217259 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:58.219550 kubelet[2821]: E0213 15:32:58.217301 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" Feb 13 15:32:58.219679 kubelet[2821]: E0213 15:32:58.217262 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:58.219679 kubelet[2821]: E0213 15:32:58.217383 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4x9md" Feb 13 15:32:58.219679 kubelet[2821]: E0213 15:32:58.217416 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-5c4qp_calico-apiserver(42f57691-268d-46e2-b88f-eb306fac4b02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podUID="42f57691-268d-46e2-b88f-eb306fac4b02" Feb 13 15:32:58.219763 kubelet[2821]: E0213 15:32:58.217428 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4x9md_kube-system(79306bcb-17fd-459b-b782-0d95273cdb59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4x9md" podUID="79306bcb-17fd-459b-b782-0d95273cdb59" Feb 13 15:32:58.222427 containerd[1492]: time="2025-02-13T15:32:58.221009003Z" level=error msg="encountered an error cleaning up failed sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.222427 containerd[1492]: time="2025-02-13T15:32:58.222013290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.225954 kubelet[2821]: E0213 15:32:58.223297 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.225954 kubelet[2821]: E0213 15:32:58.223372 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:58.225954 kubelet[2821]: E0213 15:32:58.223397 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" Feb 13 15:32:58.226169 kubelet[2821]: E0213 15:32:58.223454 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b47d5c589-xd5hs_calico-system(312ba18f-ce14-4faf-8d42-7109fe1d16cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podUID="312ba18f-ce14-4faf-8d42-7109fe1d16cd" Feb 13 15:32:58.246025 containerd[1492]: time="2025-02-13T15:32:58.245945701Z" level=error msg="Failed to destroy network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.248335 containerd[1492]: time="2025-02-13T15:32:58.247239150Z" level=error msg="encountered an error cleaning up failed sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.248335 containerd[1492]: time="2025-02-13T15:32:58.247891395Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 15:32:58.248617 kubelet[2821]: E0213 15:32:58.248447 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.248617 kubelet[2821]: E0213 15:32:58.248521 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:58.248617 kubelet[2821]: E0213 15:32:58.248545 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-84sk8" Feb 13 15:32:58.248759 kubelet[2821]: E0213 15:32:58.248601 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-84sk8_kube-system(13d5c8c3-cbc0-413c-8112-1a04a642e871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-84sk8" podUID="13d5c8c3-cbc0-413c-8112-1a04a642e871" Feb 13 15:32:58.264631 containerd[1492]: time="2025-02-13T15:32:58.264579153Z" level=error msg="Failed to destroy network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.265636 containerd[1492]: time="2025-02-13T15:32:58.265588121Z" level=error msg="encountered an error cleaning up failed sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.266131 containerd[1492]: time="2025-02-13T15:32:58.266018404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.266400 containerd[1492]: time="2025-02-13T15:32:58.265902043Z" level=error msg="Failed to destroy network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.266787 kubelet[2821]: E0213 15:32:58.266475 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.266787 kubelet[2821]: E0213 15:32:58.266583 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:58.266787 kubelet[2821]: E0213 15:32:58.266604 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6lfhr" Feb 13 15:32:58.266880 kubelet[2821]: E0213 15:32:58.266667 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6lfhr_calico-system(548c309d-1177-42c0-887f-c4ea253c82f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6lfhr" podUID="548c309d-1177-42c0-887f-c4ea253c82f9" Feb 13 15:32:58.268279 containerd[1492]: time="2025-02-13T15:32:58.268152099Z" level=error msg="encountered an error cleaning up failed sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.268605 containerd[1492]: time="2025-02-13T15:32:58.268448021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.269149 kubelet[2821]: E0213 15:32:58.269126 2821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:32:58.269232 kubelet[2821]: E0213 15:32:58.269183 2821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:58.269232 kubelet[2821]: E0213 15:32:58.269207 2821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" Feb 13 15:32:58.269280 kubelet[2821]: E0213 15:32:58.269255 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78f7c5565-fnfv9_calico-apiserver(945269f2-3dde-4aed-82a0-7f736010a34e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podUID="945269f2-3dde-4aed-82a0-7f736010a34e" Feb 13 15:32:58.356476 containerd[1492]: time="2025-02-13T15:32:58.355691842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:58.357127 containerd[1492]: time="2025-02-13T15:32:58.357070252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:32:58.357507 containerd[1492]: time="2025-02-13T15:32:58.357463495Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:58.361034 containerd[1492]: time="2025-02-13T15:32:58.360972640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:32:58.361842 containerd[1492]: time="2025-02-13T15:32:58.361792646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.758156173s" Feb 13 15:32:58.361842 containerd[1492]: time="2025-02-13T15:32:58.361840166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:32:58.371187 containerd[1492]: time="2025-02-13T15:32:58.371140913Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:32:58.388475 containerd[1492]: time="2025-02-13T15:32:58.388391715Z" level=info msg="CreateContainer within sandbox \"7d0a071b852ecd66d2bd48d6e818ef317a4111545642221ea46ef2178e55756b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f\"" Feb 13 15:32:58.389304 containerd[1492]: time="2025-02-13T15:32:58.389270322Z" level=info msg="StartContainer for \"d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f\"" Feb 13 15:32:58.423626 systemd[1]: Started cri-containerd-d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f.scope - libcontainer container d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f. Feb 13 15:32:58.462011 containerd[1492]: time="2025-02-13T15:32:58.461811838Z" level=info msg="StartContainer for \"d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f\" returns successfully" Feb 13 15:32:58.570894 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:32:58.571004 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:32:58.583214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8-shm.mount: Deactivated successfully. Feb 13 15:32:58.583564 systemd[1]: run-netns-cni\x2d86311e94\x2dbf81\x2d0aa0\x2d3746\x2d7761ee5bfacb.mount: Deactivated successfully. Feb 13 15:32:58.583709 systemd[1]: run-netns-cni\x2d0b340eda\x2d6b39\x2d6beb\x2d34cf\x2d1ea6fc1e3a57.mount: Deactivated successfully. Feb 13 15:32:58.583922 systemd[1]: run-netns-cni\x2d5fe3a129\x2dabe3\x2dfd06\x2deeff\x2d7668574644bc.mount: Deactivated successfully. Feb 13 15:32:58.584053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532051479.mount: Deactivated successfully. 
Feb 13 15:32:58.983112 kubelet[2821]: I0213 15:32:58.982711 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8" Feb 13 15:32:58.986565 containerd[1492]: time="2025-02-13T15:32:58.985430328Z" level=info msg="StopPodSandbox for \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\"" Feb 13 15:32:58.986565 containerd[1492]: time="2025-02-13T15:32:58.985673570Z" level=info msg="Ensure that sandbox c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8 in task-service has been cleanup successfully" Feb 13 15:32:58.991611 containerd[1492]: time="2025-02-13T15:32:58.988698952Z" level=info msg="TearDown network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" successfully" Feb 13 15:32:58.991611 containerd[1492]: time="2025-02-13T15:32:58.988752352Z" level=info msg="StopPodSandbox for \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" returns successfully" Feb 13 15:32:58.991543 systemd[1]: run-netns-cni\x2d176ae6d2\x2d1daf\x2d1e07\x2da1cf\x2d9796f32f7e88.mount: Deactivated successfully. Feb 13 15:32:58.994446 containerd[1492]: time="2025-02-13T15:32:58.993785748Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" Feb 13 15:32:58.994446 containerd[1492]: time="2025-02-13T15:32:58.993915869Z" level=info msg="TearDown network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" successfully" Feb 13 15:32:58.994446 containerd[1492]: time="2025-02-13T15:32:58.993926309Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" returns successfully" Feb 13 15:32:58.995542 containerd[1492]: time="2025-02-13T15:32:58.995198398Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:32:58.995542 containerd[1492]: time="2025-02-13T15:32:58.995398439Z" level=info msg="TearDown network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" successfully" Feb 13 15:32:58.995542 containerd[1492]: time="2025-02-13T15:32:58.995415520Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" returns successfully" Feb 13 15:32:58.996192 containerd[1492]: time="2025-02-13T15:32:58.996164965Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:32:58.996291 containerd[1492]: time="2025-02-13T15:32:58.996273086Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:32:58.996291 containerd[1492]: time="2025-02-13T15:32:58.996289126Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:32:58.997230 containerd[1492]: time="2025-02-13T15:32:58.997180692Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:32:58.998065 containerd[1492]: time="2025-02-13T15:32:58.997594295Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:32:58.998065 containerd[1492]: time="2025-02-13T15:32:58.997619895Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns 
successfully" Feb 13 15:32:58.998738 containerd[1492]: time="2025-02-13T15:32:58.998446741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:5,}" Feb 13 15:32:58.999696 kubelet[2821]: I0213 15:32:58.999669 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60" Feb 13 15:32:59.000768 containerd[1492]: time="2025-02-13T15:32:59.000703197Z" level=info msg="StopPodSandbox for \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\"" Feb 13 15:32:59.003421 containerd[1492]: time="2025-02-13T15:32:59.000950519Z" level=info msg="Ensure that sandbox 6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60 in task-service has been cleanup successfully" Feb 13 15:32:59.003421 containerd[1492]: time="2025-02-13T15:32:59.002691211Z" level=info msg="TearDown network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" successfully" Feb 13 15:32:59.003421 containerd[1492]: time="2025-02-13T15:32:59.002717332Z" level=info msg="StopPodSandbox for \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" returns successfully" Feb 13 15:32:59.010259 containerd[1492]: time="2025-02-13T15:32:59.006332477Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" Feb 13 15:32:59.010259 containerd[1492]: time="2025-02-13T15:32:59.008268251Z" level=info msg="TearDown network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" successfully" Feb 13 15:32:59.010259 containerd[1492]: time="2025-02-13T15:32:59.008680574Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" returns successfully" Feb 13 15:32:59.009505 systemd[1]: run-netns-cni\x2d5a5cb4a4\x2d6e39\x2d835b\x2da508\x2de355401277ef.mount: Deactivated successfully. 
Feb 13 15:32:59.014014 containerd[1492]: time="2025-02-13T15:32:59.012104478Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:32:59.014014 containerd[1492]: time="2025-02-13T15:32:59.012233559Z" level=info msg="TearDown network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" successfully" Feb 13 15:32:59.014014 containerd[1492]: time="2025-02-13T15:32:59.012245959Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" returns successfully" Feb 13 15:32:59.015683 containerd[1492]: time="2025-02-13T15:32:59.015432181Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:32:59.015683 containerd[1492]: time="2025-02-13T15:32:59.015577502Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:32:59.015683 containerd[1492]: time="2025-02-13T15:32:59.015593142Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:32:59.022127 containerd[1492]: time="2025-02-13T15:32:59.021705026Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:32:59.022127 containerd[1492]: time="2025-02-13T15:32:59.021834707Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:32:59.022127 containerd[1492]: time="2025-02-13T15:32:59.021845307Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:32:59.024058 kubelet[2821]: I0213 15:32:59.024031 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a" Feb 13 15:32:59.024324 containerd[1492]: time="2025-02-13T15:32:59.024185003Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:32:59.025205 containerd[1492]: time="2025-02-13T15:32:59.025160650Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:32:59.025205 containerd[1492]: time="2025-02-13T15:32:59.025201450Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:32:59.031488 containerd[1492]: time="2025-02-13T15:32:59.026291858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:6,}" Feb 13 15:32:59.037481 containerd[1492]: time="2025-02-13T15:32:59.037061654Z" level=info msg="StopPodSandbox for \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\"" Feb 13 15:32:59.037481 containerd[1492]: time="2025-02-13T15:32:59.037298096Z" level=info msg="Ensure that sandbox fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a in task-service has been cleanup successfully" Feb 13 15:32:59.041891 containerd[1492]: time="2025-02-13T15:32:59.041821048Z" level=info msg="TearDown network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" successfully" Feb 13 15:32:59.041891 containerd[1492]: time="2025-02-13T15:32:59.041881928Z" 
level=info msg="StopPodSandbox for \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" returns successfully" Feb 13 15:32:59.044475 containerd[1492]: time="2025-02-13T15:32:59.044428706Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" Feb 13 15:32:59.044843 containerd[1492]: time="2025-02-13T15:32:59.044691468Z" level=info msg="TearDown network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" successfully" Feb 13 15:32:59.044879 containerd[1492]: time="2025-02-13T15:32:59.044841989Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" returns successfully" Feb 13 15:32:59.045637 containerd[1492]: time="2025-02-13T15:32:59.045604554Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:32:59.046159 containerd[1492]: time="2025-02-13T15:32:59.046005997Z" level=info msg="TearDown network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" successfully" Feb 13 15:32:59.046159 containerd[1492]: time="2025-02-13T15:32:59.046039397Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" returns successfully" Feb 13 15:32:59.046920 containerd[1492]: time="2025-02-13T15:32:59.046441560Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:32:59.046920 containerd[1492]: time="2025-02-13T15:32:59.046554281Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:32:59.046920 containerd[1492]: time="2025-02-13T15:32:59.046576081Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:32:59.048782 containerd[1492]: time="2025-02-13T15:32:59.048714936Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:32:59.049344 containerd[1492]: time="2025-02-13T15:32:59.049227260Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:32:59.049344 containerd[1492]: time="2025-02-13T15:32:59.049251740Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:32:59.050932 containerd[1492]: time="2025-02-13T15:32:59.050725711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:32:59.055659 kubelet[2821]: I0213 15:32:59.055452 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e" Feb 13 15:32:59.058605 containerd[1492]: time="2025-02-13T15:32:59.057706440Z" level=info msg="StopPodSandbox for \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\"" Feb 13 15:32:59.058605 containerd[1492]: time="2025-02-13T15:32:59.058222204Z" level=info msg="Ensure that sandbox 3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e in task-service has been cleanup successfully" Feb 13 15:32:59.061722 containerd[1492]: time="2025-02-13T15:32:59.061673068Z" level=info msg="TearDown network for sandbox 
\"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" successfully" Feb 13 15:32:59.062702 containerd[1492]: time="2025-02-13T15:32:59.062650995Z" level=info msg="StopPodSandbox for \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" returns successfully" Feb 13 15:32:59.063739 containerd[1492]: time="2025-02-13T15:32:59.063696202Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" Feb 13 15:32:59.063982 containerd[1492]: time="2025-02-13T15:32:59.063965044Z" level=info msg="TearDown network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" successfully" Feb 13 15:32:59.064166 containerd[1492]: time="2025-02-13T15:32:59.064150885Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" returns successfully" Feb 13 15:32:59.065031 containerd[1492]: time="2025-02-13T15:32:59.065006651Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:32:59.065401 containerd[1492]: time="2025-02-13T15:32:59.065335174Z" level=info msg="TearDown network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" successfully" Feb 13 15:32:59.065401 containerd[1492]: time="2025-02-13T15:32:59.065402734Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" returns successfully" Feb 13 15:32:59.071042 containerd[1492]: time="2025-02-13T15:32:59.070866813Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:32:59.071042 containerd[1492]: time="2025-02-13T15:32:59.070966134Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:32:59.071042 containerd[1492]: time="2025-02-13T15:32:59.070976734Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:32:59.071967 containerd[1492]: time="2025-02-13T15:32:59.071787379Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:32:59.071967 containerd[1492]: time="2025-02-13T15:32:59.071897780Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:32:59.071967 containerd[1492]: time="2025-02-13T15:32:59.071907620Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:32:59.074307 containerd[1492]: time="2025-02-13T15:32:59.072810067Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:32:59.074307 containerd[1492]: time="2025-02-13T15:32:59.072906067Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:32:59.074307 containerd[1492]: time="2025-02-13T15:32:59.072915187Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:32:59.074532 kubelet[2821]: I0213 15:32:59.073273 2821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9" Feb 13 15:32:59.074843 containerd[1492]: 
time="2025-02-13T15:32:59.074773960Z" level=info msg="StopPodSandbox for \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\"" Feb 13 15:32:59.075207 containerd[1492]: time="2025-02-13T15:32:59.075151123Z" level=info msg="Ensure that sandbox b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9 in task-service has been cleanup successfully" Feb 13 15:32:59.075922 containerd[1492]: time="2025-02-13T15:32:59.075865208Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:32:59.076292 containerd[1492]: time="2025-02-13T15:32:59.076228811Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:32:59.076473 containerd[1492]: time="2025-02-13T15:32:59.076457372Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:32:59.078821 containerd[1492]: time="2025-02-13T15:32:59.078735388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:7,}" Feb 13 15:32:59.080683 containerd[1492]: time="2025-02-13T15:32:59.080006557Z" level=info msg="TearDown network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" successfully" Feb 13 15:32:59.081009 containerd[1492]: time="2025-02-13T15:32:59.080870243Z" level=info msg="StopPodSandbox for \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" returns successfully" Feb 13 15:32:59.081902 containerd[1492]: time="2025-02-13T15:32:59.081876451Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" Feb 13 15:32:59.082159 containerd[1492]: time="2025-02-13T15:32:59.082143052Z" level=info msg="TearDown network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" successfully" Feb 13 15:32:59.083758 containerd[1492]: time="2025-02-13T15:32:59.083683183Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" returns successfully" Feb 13 15:32:59.084449 containerd[1492]: time="2025-02-13T15:32:59.084416709Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:32:59.085160 containerd[1492]: time="2025-02-13T15:32:59.085103193Z" level=info msg="TearDown network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" successfully" Feb 13 15:32:59.085160 containerd[1492]: time="2025-02-13T15:32:59.085124514Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" returns successfully" Feb 13 15:32:59.086288 containerd[1492]: time="2025-02-13T15:32:59.086138521Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:32:59.086288 containerd[1492]: time="2025-02-13T15:32:59.086230521Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:32:59.086288 containerd[1492]: time="2025-02-13T15:32:59.086241081Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:32:59.087403 kubelet[2821]: I0213 15:32:59.086472 2821 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11" Feb 13 15:32:59.087925 containerd[1492]: time="2025-02-13T15:32:59.087762212Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:32:59.088378 containerd[1492]: time="2025-02-13T15:32:59.088259456Z" level=info msg="StopPodSandbox for \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\"" Feb 13 15:32:59.088728 containerd[1492]: time="2025-02-13T15:32:59.088455817Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:32:59.088903 containerd[1492]: time="2025-02-13T15:32:59.088886820Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:32:59.089747 containerd[1492]: time="2025-02-13T15:32:59.088811700Z" level=info msg="Ensure that sandbox 511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11 in task-service has been cleanup successfully" Feb 13 15:32:59.090214 containerd[1492]: time="2025-02-13T15:32:59.090183669Z" level=info msg="TearDown network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" successfully" Feb 13 15:32:59.090307 containerd[1492]: time="2025-02-13T15:32:59.090273710Z" level=info msg="StopPodSandbox for \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" returns successfully" Feb 13 15:32:59.090566 containerd[1492]: time="2025-02-13T15:32:59.090533272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:5,}" Feb 13 15:32:59.092485 containerd[1492]: time="2025-02-13T15:32:59.092237244Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" Feb 13 15:32:59.092485 containerd[1492]: time="2025-02-13T15:32:59.092396925Z" level=info msg="TearDown network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" successfully" Feb 13 15:32:59.092485 containerd[1492]: time="2025-02-13T15:32:59.092410805Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" returns successfully" Feb 13 15:32:59.095413 containerd[1492]: time="2025-02-13T15:32:59.095107544Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:32:59.095680 containerd[1492]: time="2025-02-13T15:32:59.095600747Z" level=info msg="TearDown network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" successfully" Feb 13 15:32:59.095680 containerd[1492]: time="2025-02-13T15:32:59.095626788Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" returns successfully" Feb 13 15:32:59.097395 containerd[1492]: time="2025-02-13T15:32:59.096999237Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:32:59.097961 containerd[1492]: time="2025-02-13T15:32:59.097522881Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:32:59.097961 containerd[1492]: time="2025-02-13T15:32:59.097888644Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:32:59.098736 
containerd[1492]: time="2025-02-13T15:32:59.098531848Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:32:59.098736 containerd[1492]: time="2025-02-13T15:32:59.098661009Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:32:59.098736 containerd[1492]: time="2025-02-13T15:32:59.098674889Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:32:59.099600 containerd[1492]: time="2025-02-13T15:32:59.099479615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:32:59.475121 systemd-networkd[1390]: calif011545867c: Link UP Feb 13 15:32:59.476007 systemd-networkd[1390]: calif011545867c: Gained carrier Feb 13 15:32:59.511743 kubelet[2821]: I0213 15:32:59.510752 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-lv2tx" podStartSLOduration=2.266103802 podStartE2EDuration="17.510698719s" podCreationTimestamp="2025-02-13 15:32:42 +0000 UTC" firstStartedPulling="2025-02-13 15:32:43.117542971 +0000 UTC m=+23.850458742" lastFinishedPulling="2025-02-13 15:32:58.362137888 +0000 UTC m=+39.095053659" observedRunningTime="2025-02-13 15:32:59.023259437 +0000 UTC m=+39.756175208" watchObservedRunningTime="2025-02-13 15:32:59.510698719 +0000 UTC m=+40.243614570" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.155 [INFO][4719] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.184 [INFO][4719] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0 calico-kube-controllers-7b47d5c589- calico-system 312ba18f-ce14-4faf-8d42-7109fe1d16cd 712 0 2025-02-13 15:32:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b47d5c589 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e calico-kube-controllers-7b47d5c589-xd5hs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif011545867c [] []}} ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.184 [INFO][4719] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.294 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" HandleID="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" 
Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.350 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" HandleID="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316960), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"calico-kube-controllers-7b47d5c589-xd5hs", "timestamp":"2025-02-13 15:32:59.294701834 +0000 UTC"}, Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.350 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.351 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.351 [INFO][4776] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.360 [INFO][4776] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.381 [INFO][4776] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.397 [INFO][4776] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.401 [INFO][4776] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.407 [INFO][4776] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.407 [INFO][4776] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.418 [INFO][4776] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0 Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.429 [INFO][4776] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.440 [INFO][4776] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.65/26] block=192.168.24.64/26 handle="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.440 [INFO][4776] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.24.65/26] handle="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.440 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:32:59.512558 containerd[1492]: 2025-02-13 15:32:59.440 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.65/26] IPv6=[] ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" HandleID="k8s-pod-network.ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.448 [INFO][4719] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0", GenerateName:"calico-kube-controllers-7b47d5c589-", Namespace:"calico-system", SelfLink:"", UID:"312ba18f-ce14-4faf-8d42-7109fe1d16cd", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b47d5c589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"calico-kube-controllers-7b47d5c589-xd5hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif011545867c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.448 [INFO][4719] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.65/32] ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.448 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif011545867c ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.477 [INFO][4719] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.485 [INFO][4719] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0", GenerateName:"calico-kube-controllers-7b47d5c589-", Namespace:"calico-system", SelfLink:"", UID:"312ba18f-ce14-4faf-8d42-7109fe1d16cd", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b47d5c589", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0", Pod:"calico-kube-controllers-7b47d5c589-xd5hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif011545867c", MAC:"0a:75:0b:29:16:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.514474 containerd[1492]: 2025-02-13 15:32:59.508 [INFO][4719] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0" Namespace="calico-system" Pod="calico-kube-controllers-7b47d5c589-xd5hs" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--kube--controllers--7b47d5c589--xd5hs-eth0" Feb 13 15:32:59.555298 systemd-networkd[1390]: cali4658f6095f9: Link UP Feb 13 15:32:59.556763 systemd-networkd[1390]: cali4658f6095f9: Gained carrier Feb 13 15:32:59.588145 containerd[1492]: time="2025-02-13T15:32:59.583914596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.588145 containerd[1492]: time="2025-02-13T15:32:59.584217918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.588145 containerd[1492]: time="2025-02-13T15:32:59.584230558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.588145 containerd[1492]: time="2025-02-13T15:32:59.584400800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.596287 systemd[1]: run-netns-cni\x2de2173a05\x2d5453\x2dd28a\x2d9942\x2d9d81aaac4c60.mount: Deactivated successfully. Feb 13 15:32:59.598794 systemd[1]: run-netns-cni\x2d208e1aec\x2d7922\x2d1fcd\x2d2d5a\x2d248f8f6540b7.mount: Deactivated successfully. Feb 13 15:32:59.598881 systemd[1]: run-netns-cni\x2d3f744ecc\x2d0df7\x2da0d4\x2da06e\x2dcf5f6b203508.mount: Deactivated successfully. Feb 13 15:32:59.598943 systemd[1]: run-netns-cni\x2d1bef4c42\x2db160\x2deded\x2d527c\x2d8d887191fdc9.mount: Deactivated successfully. Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.103 [INFO][4708] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.135 [INFO][4708] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0 coredns-76f75df574- kube-system 79306bcb-17fd-459b-b782-0d95273cdb59 711 0 2025-02-13 15:32:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e coredns-76f75df574-4x9md eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4658f6095f9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.137 [INFO][4708] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.354 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" HandleID="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.393 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" HandleID="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c3d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"coredns-76f75df574-4x9md", "timestamp":"2025-02-13 15:32:59.354082613 +0000 UTC"}, Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:59.623463 containerd[1492]: 
2025-02-13 15:32:59.394 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.440 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.441 [INFO][4750] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.450 [INFO][4750] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.472 [INFO][4750] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.501 [INFO][4750] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.515 [INFO][4750] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.521 [INFO][4750] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.521 [INFO][4750] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.525 [INFO][4750] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.533 [INFO][4750] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.542 [INFO][4750] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.66/26] block=192.168.24.64/26 handle="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.542 [INFO][4750] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.66/26] handle="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.542 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
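The ipam/ipam.go trace above (and the equivalent one for the kube-controllers pod earlier) shows Calico's block-affinity allocation pattern: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.24.64/26 block, load the block, claim the next free address (.65 for calico-kube-controllers-7b47d5c589-xd5hs, .66 for coredns-76f75df574-4x9md), write the block back, and release the lock. The toy allocator below only illustrates that idea with an in-memory block; the names and structure are made up and it is not Calico's IPAM code:

package main

import (
	"fmt"
	"net"
	"sync"
)

// Toy block allocator: one /26 block is affine to this host and addresses are
// claimed sequentially for named handles under a lock (standing in for the
// "host-wide IPAM lock" in the log). Illustration only.
type block struct {
	mu    sync.Mutex
	cidr  *net.IPNet
	next  int               // offset of the next free address in the block
	claim map[string]string // handle -> assigned address
}

func (b *block) assign(handle string) (string, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ones, bits := b.cidr.Mask.Size()
	if b.next >= 1<<(bits-ones) {
		return "", fmt.Errorf("block %s exhausted", b.cidr)
	}
	ip := make(net.IP, len(b.cidr.IP.To4()))
	copy(ip, b.cidr.IP.To4())
	ip[3] += byte(b.next) // fine for a /26 contained in one octet
	b.next++
	addr := fmt.Sprintf("%s/%d", ip, ones)
	b.claim[handle] = addr
	return addr, nil
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.24.64/26")
	// Start at offset 1 (.65) to match the first address claimed in the log.
	b := &block{cidr: cidr, next: 1, claim: map[string]string{}}
	for _, h := range []string{
		"calico-kube-controllers-7b47d5c589-xd5hs",
		"coredns-76f75df574-4x9md",
	} {
		addr, _ := b.assign(h)
		fmt.Println(h, "->", addr)
	}
}

Run against the two handles above it prints 192.168.24.65/26 and then 192.168.24.66/26, matching the addresses claimed in the journal.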
Feb 13 15:32:59.623463 containerd[1492]: 2025-02-13 15:32:59.542 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.66/26] IPv6=[] ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" HandleID="k8s-pod-network.583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.547 [INFO][4708] cni-plugin/k8s.go 386: Populated endpoint ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"79306bcb-17fd-459b-b782-0d95273cdb59", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"coredns-76f75df574-4x9md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4658f6095f9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.547 [INFO][4708] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.66/32] ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.547 [INFO][4708] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4658f6095f9 ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.561 [INFO][4708] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" 
WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.567 [INFO][4708] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"79306bcb-17fd-459b-b782-0d95273cdb59", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a", Pod:"coredns-76f75df574-4x9md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4658f6095f9", MAC:"a2:4f:2b:2c:90:d1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.624105 containerd[1492]: 2025-02-13 15:32:59.599 [INFO][4708] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a" Namespace="kube-system" Pod="coredns-76f75df574-4x9md" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--4x9md-eth0" Feb 13 15:32:59.633855 systemd[1]: Started cri-containerd-ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0.scope - libcontainer container ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0. Feb 13 15:32:59.665906 systemd-networkd[1390]: cali816594e5077: Link UP Feb 13 15:32:59.673714 systemd-networkd[1390]: cali816594e5077: Gained carrier Feb 13 15:32:59.689529 containerd[1492]: time="2025-02-13T15:32:59.688643416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.689529 containerd[1492]: time="2025-02-13T15:32:59.688741096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.689529 containerd[1492]: time="2025-02-13T15:32:59.688759937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.690587 containerd[1492]: time="2025-02-13T15:32:59.690459429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.212 [INFO][4740] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.251 [INFO][4740] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0 csi-node-driver- calico-system 548c309d-1177-42c0-887f-c4ea253c82f9 628 0 2025-02-13 15:32:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e csi-node-driver-6lfhr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali816594e5077 [] []}} ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.251 [INFO][4740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.382 [INFO][4784] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" HandleID="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.422 [INFO][4784] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" HandleID="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d340), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"csi-node-driver-6lfhr", "timestamp":"2025-02-13 15:32:59.382701415 +0000 UTC"}, Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.422 [INFO][4784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.543 [INFO][4784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.543 [INFO][4784] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.548 [INFO][4784] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.566 [INFO][4784] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.589 [INFO][4784] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.604 [INFO][4784] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.609 [INFO][4784] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.609 [INFO][4784] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.612 [INFO][4784] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89 Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.619 [INFO][4784] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.631 [INFO][4784] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.67/26] block=192.168.24.64/26 handle="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.631 [INFO][4784] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.67/26] handle="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.631 [INFO][4784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
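The "Populated endpoint" and "Added Mac, interface name, and active container ID" lines in this journal serialize full projectcalico.org/v3 WorkloadEndpoint objects inline, which makes them hard to scan. As a reading aid, here is a trimmed-down, hypothetical struct holding only the fields that actually vary across the dumps in this section; it is not the real Calico API type:

package main

import "fmt"

// workloadEndpoint mirrors just the fields worth reading in the endpoint
// dumps above; everything else in the real object is elided here.
type workloadEndpoint struct {
	Namespace     string   // e.g. "calico-system", "kube-system"
	Pod           string   // e.g. "csi-node-driver-6lfhr"
	Node          string   // "ci-4186-1-1-6-ce8ef0549e"
	InterfaceName string   // host-side veth, e.g. "cali816594e5077"
	MAC           string   // empty until the interface is created
	IPNetworks    []string // /32 address handed out by IPAM
	Profiles      []string // "kns.<namespace>", "ksa.<namespace>.<serviceaccount>"
}

func main() {
	ep := workloadEndpoint{
		Namespace:     "calico-system",
		Pod:           "csi-node-driver-6lfhr",
		Node:          "ci-4186-1-1-6-ce8ef0549e",
		InterfaceName: "cali816594e5077",
		MAC:           "f6:96:9f:cd:c1:40",
		IPNetworks:    []string{"192.168.24.67/32"},
		Profiles:      []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
	}
	fmt.Printf("%s/%s -> %v via %s (%s)\n",
		ep.Namespace, ep.Pod, ep.IPNetworks, ep.InterfaceName, ep.MAC)
}

The MAC field is empty in the first dump for each pod and filled in by the second, which is exactly the two-step "Populated endpoint" then "Added Mac, interface name, and active container ID" sequence visible above for calif011545867c, cali4658f6095f9, and cali816594e5077.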
Feb 13 15:32:59.706374 containerd[1492]: 2025-02-13 15:32:59.631 [INFO][4784] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.67/26] IPv6=[] ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" HandleID="k8s-pod-network.265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.640 [INFO][4740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"548c309d-1177-42c0-887f-c4ea253c82f9", ResourceVersion:"628", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"csi-node-driver-6lfhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816594e5077", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.641 [INFO][4740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.67/32] ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.641 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali816594e5077 ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.679 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.681 [INFO][4740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"548c309d-1177-42c0-887f-c4ea253c82f9", ResourceVersion:"628", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89", Pod:"csi-node-driver-6lfhr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali816594e5077", MAC:"f6:96:9f:cd:c1:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.711955 containerd[1492]: 2025-02-13 15:32:59.695 [INFO][4740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89" Namespace="calico-system" Pod="csi-node-driver-6lfhr" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-csi--node--driver--6lfhr-eth0" Feb 13 15:32:59.736919 systemd[1]: run-containerd-runc-k8s.io-583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a-runc.DurP6B.mount: Deactivated successfully. Feb 13 15:32:59.745669 systemd[1]: Started cri-containerd-583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a.scope - libcontainer container 583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a. Feb 13 15:32:59.787858 containerd[1492]: time="2025-02-13T15:32:59.785449099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.787858 containerd[1492]: time="2025-02-13T15:32:59.785599380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.787858 containerd[1492]: time="2025-02-13T15:32:59.785617861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.787858 containerd[1492]: time="2025-02-13T15:32:59.786190585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.818270 systemd-networkd[1390]: cali19b09d7c35f: Link UP Feb 13 15:32:59.818734 systemd-networkd[1390]: cali19b09d7c35f: Gained carrier Feb 13 15:32:59.823651 systemd[1]: Started cri-containerd-265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89.scope - libcontainer container 265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89. Feb 13 15:32:59.843827 containerd[1492]: time="2025-02-13T15:32:59.843758431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b47d5c589-xd5hs,Uid:312ba18f-ce14-4faf-8d42-7109fe1d16cd,Namespace:calico-system,Attempt:6,} returns sandbox id \"ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0\"" Feb 13 15:32:59.851211 containerd[1492]: time="2025-02-13T15:32:59.850829081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.262 [INFO][4728] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.321 [INFO][4728] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0 calico-apiserver-78f7c5565- calico-apiserver 42f57691-268d-46e2-b88f-eb306fac4b02 713 0 2025-02-13 15:32:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78f7c5565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e calico-apiserver-78f7c5565-5c4qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19b09d7c35f [] []}} ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.321 [INFO][4728] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.441 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" HandleID="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.489 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" HandleID="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8540), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"calico-apiserver-78f7c5565-5c4qp", "timestamp":"2025-02-13 15:32:59.441071267 +0000 UTC"}, 
Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.493 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.633 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.633 [INFO][4797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.642 [INFO][4797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.662 [INFO][4797] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.683 [INFO][4797] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.687 [INFO][4797] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.696 [INFO][4797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.698 [INFO][4797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.712 [INFO][4797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2 Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.743 [INFO][4797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.765 [INFO][4797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.68/26] block=192.168.24.64/26 handle="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.765 [INFO][4797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.68/26] handle="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.765 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:32:59.853583 containerd[1492]: 2025-02-13 15:32:59.765 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.68/26] IPv6=[] ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" HandleID="k8s-pod-network.b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.790 [INFO][4728] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0", GenerateName:"calico-apiserver-78f7c5565-", Namespace:"calico-apiserver", SelfLink:"", UID:"42f57691-268d-46e2-b88f-eb306fac4b02", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f7c5565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"calico-apiserver-78f7c5565-5c4qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19b09d7c35f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.791 [INFO][4728] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.68/32] ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.791 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19b09d7c35f ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.817 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.817 [INFO][4728] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0", GenerateName:"calico-apiserver-78f7c5565-", Namespace:"calico-apiserver", SelfLink:"", UID:"42f57691-268d-46e2-b88f-eb306fac4b02", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f7c5565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2", Pod:"calico-apiserver-78f7c5565-5c4qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19b09d7c35f", MAC:"aa:97:04:b2:b2:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.857189 containerd[1492]: 2025-02-13 15:32:59.846 [INFO][4728] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-5c4qp" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--5c4qp-eth0" Feb 13 15:32:59.880227 systemd-networkd[1390]: calibdf46ab7666: Link UP Feb 13 15:32:59.887795 systemd-networkd[1390]: calibdf46ab7666: Gained carrier Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.291 [INFO][4752] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.336 [INFO][4752] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0 coredns-76f75df574- kube-system 13d5c8c3-cbc0-413c-8112-1a04a642e871 709 0 2025-02-13 15:32:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e coredns-76f75df574-84sk8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibdf46ab7666 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.336 
[INFO][4752] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.449 [INFO][4803] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" HandleID="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.514 [INFO][4803] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" HandleID="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"coredns-76f75df574-84sk8", "timestamp":"2025-02-13 15:32:59.44994941 +0000 UTC"}, Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.514 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.766 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.767 [INFO][4803] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.772 [INFO][4803] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.783 [INFO][4803] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.798 [INFO][4803] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.805 [INFO][4803] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.811 [INFO][4803] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.811 [INFO][4803] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.821 [INFO][4803] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.834 [INFO][4803] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.852 [INFO][4803] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.69/26] block=192.168.24.64/26 handle="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.856 [INFO][4803] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.69/26] handle="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.856 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:32:59.922537 containerd[1492]: 2025-02-13 15:32:59.856 [INFO][4803] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.69/26] IPv6=[] ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" HandleID="k8s-pod-network.430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.864 [INFO][4752] cni-plugin/k8s.go 386: Populated endpoint ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"13d5c8c3-cbc0-413c-8112-1a04a642e871", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"coredns-76f75df574-84sk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdf46ab7666", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.865 [INFO][4752] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.69/32] ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.865 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdf46ab7666 ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.889 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" 
WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.897 [INFO][4752] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"13d5c8c3-cbc0-413c-8112-1a04a642e871", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a", Pod:"coredns-76f75df574-84sk8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibdf46ab7666", MAC:"02:af:b7:1c:0f:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:32:59.923089 containerd[1492]: 2025-02-13 15:32:59.916 [INFO][4752] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a" Namespace="kube-system" Pod="coredns-76f75df574-84sk8" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-coredns--76f75df574--84sk8-eth0" Feb 13 15:32:59.931177 kubelet[2821]: E0213 15:32:59.931052 2821 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79306bcb_17fd_459b_b782_0d95273cdb59.slice/cri-containerd-583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a.scope\": RecentStats: unable to find data in memory cache]" Feb 13 15:32:59.942582 containerd[1492]: time="2025-02-13T15:32:59.940827397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6lfhr,Uid:548c309d-1177-42c0-887f-c4ea253c82f9,Namespace:calico-system,Attempt:7,} returns sandbox id \"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89\"" Feb 13 15:32:59.942978 containerd[1492]: time="2025-02-13T15:32:59.942937212Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-4x9md,Uid:79306bcb-17fd-459b-b782-0d95273cdb59,Namespace:kube-system,Attempt:5,} returns sandbox id \"583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a\"" Feb 13 15:32:59.952607 containerd[1492]: time="2025-02-13T15:32:59.952552440Z" level=info msg="CreateContainer within sandbox \"583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:32:59.968192 containerd[1492]: time="2025-02-13T15:32:59.967000982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:32:59.968192 containerd[1492]: time="2025-02-13T15:32:59.967068382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:32:59.968192 containerd[1492]: time="2025-02-13T15:32:59.967084182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.968192 containerd[1492]: time="2025-02-13T15:32:59.967203303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:32:59.994703 systemd[1]: Started cri-containerd-b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2.scope - libcontainer container b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2. Feb 13 15:33:00.024690 systemd-networkd[1390]: calie00dafd295d: Link UP Feb 13 15:33:00.024965 systemd-networkd[1390]: calie00dafd295d: Gained carrier Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.334 [INFO][4764] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.393 [INFO][4764] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0 calico-apiserver-78f7c5565- calico-apiserver 945269f2-3dde-4aed-82a0-7f736010a34e 710 0 2025-02-13 15:32:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78f7c5565 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4186-1-1-6-ce8ef0549e calico-apiserver-78f7c5565-fnfv9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie00dafd295d [] []}} ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.393 [INFO][4764] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.503 [INFO][4809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" 
HandleID="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.529 [INFO][4809] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" HandleID="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034bd70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4186-1-1-6-ce8ef0549e", "pod":"calico-apiserver-78f7c5565-fnfv9", "timestamp":"2025-02-13 15:32:59.503205026 +0000 UTC"}, Hostname:"ci-4186-1-1-6-ce8ef0549e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.530 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.856 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.856 [INFO][4809] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4186-1-1-6-ce8ef0549e' Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.863 [INFO][4809] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.883 [INFO][4809] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.904 [INFO][4809] ipam/ipam.go 489: Trying affinity for 192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.913 [INFO][4809] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.936 [INFO][4809] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.64/26 host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.937 [INFO][4809] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.64/26 handle="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.943 [INFO][4809] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:32:59.969 [INFO][4809] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.64/26 handle="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:33:00.001 [INFO][4809] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.70/26] block=192.168.24.64/26 handle="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 
containerd[1492]: 2025-02-13 15:33:00.001 [INFO][4809] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.70/26] handle="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" host="ci-4186-1-1-6-ce8ef0549e" Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:33:00.001 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:33:00.046135 containerd[1492]: 2025-02-13 15:33:00.001 [INFO][4809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.70/26] IPv6=[] ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" HandleID="k8s-pod-network.f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Workload="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.012 [INFO][4764] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0", GenerateName:"calico-apiserver-78f7c5565-", Namespace:"calico-apiserver", SelfLink:"", UID:"945269f2-3dde-4aed-82a0-7f736010a34e", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f7c5565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"", Pod:"calico-apiserver-78f7c5565-fnfv9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie00dafd295d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.014 [INFO][4764] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.70/32] ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.015 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie00dafd295d ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.022 [INFO][4764] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.026 [INFO][4764] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0", GenerateName:"calico-apiserver-78f7c5565-", Namespace:"calico-apiserver", SelfLink:"", UID:"945269f2-3dde-4aed-82a0-7f736010a34e", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78f7c5565", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4186-1-1-6-ce8ef0549e", ContainerID:"f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a", Pod:"calico-apiserver-78f7c5565-fnfv9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie00dafd295d", MAC:"2e:30:19:81:74:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:33:00.047169 containerd[1492]: 2025-02-13 15:33:00.042 [INFO][4764] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a" Namespace="calico-apiserver" Pod="calico-apiserver-78f7c5565-fnfv9" WorkloadEndpoint="ci--4186--1--1--6--ce8ef0549e-k8s-calico--apiserver--78f7c5565--fnfv9-eth0" Feb 13 15:33:00.052063 containerd[1492]: time="2025-02-13T15:33:00.049987845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:00.052063 containerd[1492]: time="2025-02-13T15:33:00.050149486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:00.052063 containerd[1492]: time="2025-02-13T15:33:00.050164166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:00.052063 containerd[1492]: time="2025-02-13T15:33:00.050380368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:00.070288 containerd[1492]: time="2025-02-13T15:33:00.070238907Z" level=info msg="CreateContainer within sandbox \"583eee13d171cfa67a7ec006ec0a224bb1b7dea7d2f86d7d5c8405e6f5a9f96a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5053bed8712238dedbfcdc0eee09a4f5764a4bea43a345480961cb5307fdfc56\"" Feb 13 15:33:00.073416 containerd[1492]: time="2025-02-13T15:33:00.073292968Z" level=info msg="StartContainer for \"5053bed8712238dedbfcdc0eee09a4f5764a4bea43a345480961cb5307fdfc56\"" Feb 13 15:33:00.089984 systemd[1]: Started cri-containerd-430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a.scope - libcontainer container 430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a. Feb 13 15:33:00.126757 kubelet[2821]: I0213 15:33:00.126258 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:33:00.166300 containerd[1492]: time="2025-02-13T15:33:00.165847056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:33:00.166765 containerd[1492]: time="2025-02-13T15:33:00.166576941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:33:00.166925 containerd[1492]: time="2025-02-13T15:33:00.166706622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:00.175517 containerd[1492]: time="2025-02-13T15:33:00.175141561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:33:00.213164 systemd[1]: Started cri-containerd-5053bed8712238dedbfcdc0eee09a4f5764a4bea43a345480961cb5307fdfc56.scope - libcontainer container 5053bed8712238dedbfcdc0eee09a4f5764a4bea43a345480961cb5307fdfc56. Feb 13 15:33:00.230589 systemd[1]: Started cri-containerd-f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a.scope - libcontainer container f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a. 
Feb 13 15:33:00.362588 containerd[1492]: time="2025-02-13T15:33:00.362491873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-5c4qp,Uid:42f57691-268d-46e2-b88f-eb306fac4b02,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2\"" Feb 13 15:33:00.406204 containerd[1492]: time="2025-02-13T15:33:00.406163419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-84sk8,Uid:13d5c8c3-cbc0-413c-8112-1a04a642e871,Namespace:kube-system,Attempt:5,} returns sandbox id \"430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a\"" Feb 13 15:33:00.417473 containerd[1492]: time="2025-02-13T15:33:00.416639332Z" level=info msg="StartContainer for \"5053bed8712238dedbfcdc0eee09a4f5764a4bea43a345480961cb5307fdfc56\" returns successfully" Feb 13 15:33:00.419025 containerd[1492]: time="2025-02-13T15:33:00.418980309Z" level=info msg="CreateContainer within sandbox \"430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:33:00.464674 containerd[1492]: time="2025-02-13T15:33:00.464625228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78f7c5565-fnfv9,Uid:945269f2-3dde-4aed-82a0-7f736010a34e,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a\"" Feb 13 15:33:00.465464 containerd[1492]: time="2025-02-13T15:33:00.465325873Z" level=info msg="CreateContainer within sandbox \"430918d466c0091d8792e0cf7cacfe0b4434fb514d0c94926946495767fe000a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1a415a2ef27fc5638916a1666196e9cbb1fabdac626ed487fc2b023f6afd6ef\"" Feb 13 15:33:00.467249 containerd[1492]: time="2025-02-13T15:33:00.467209447Z" level=info msg="StartContainer for \"d1a415a2ef27fc5638916a1666196e9cbb1fabdac626ed487fc2b023f6afd6ef\"" Feb 13 15:33:00.518610 systemd[1]: Started cri-containerd-d1a415a2ef27fc5638916a1666196e9cbb1fabdac626ed487fc2b023f6afd6ef.scope - libcontainer container d1a415a2ef27fc5638916a1666196e9cbb1fabdac626ed487fc2b023f6afd6ef. 
Feb 13 15:33:00.583840 containerd[1492]: time="2025-02-13T15:33:00.583548661Z" level=info msg="StartContainer for \"d1a415a2ef27fc5638916a1666196e9cbb1fabdac626ed487fc2b023f6afd6ef\" returns successfully" Feb 13 15:33:00.836473 systemd-networkd[1390]: cali19b09d7c35f: Gained IPv6LL Feb 13 15:33:00.839582 kernel: bpftool[5333]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:33:00.900403 systemd-networkd[1390]: cali4658f6095f9: Gained IPv6LL Feb 13 15:33:01.060917 systemd-networkd[1390]: vxlan.calico: Link UP Feb 13 15:33:01.060934 systemd-networkd[1390]: vxlan.calico: Gained carrier Feb 13 15:33:01.180482 kubelet[2821]: I0213 15:33:01.178388 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4x9md" podStartSLOduration=27.178333816 podStartE2EDuration="27.178333816s" podCreationTimestamp="2025-02-13 15:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:01.177302209 +0000 UTC m=+41.910218860" watchObservedRunningTime="2025-02-13 15:33:01.178333816 +0000 UTC m=+41.911249587" Feb 13 15:33:01.217303 kubelet[2821]: I0213 15:33:01.217073 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-84sk8" podStartSLOduration=27.217032725 podStartE2EDuration="27.217032725s" podCreationTimestamp="2025-02-13 15:32:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:33:01.197047106 +0000 UTC m=+41.929962957" watchObservedRunningTime="2025-02-13 15:33:01.217032725 +0000 UTC m=+41.949948496" Feb 13 15:33:01.348695 systemd-networkd[1390]: cali816594e5077: Gained IPv6LL Feb 13 15:33:01.413580 systemd-networkd[1390]: calif011545867c: Gained IPv6LL Feb 13 15:33:01.604726 systemd-networkd[1390]: calie00dafd295d: Gained IPv6LL Feb 13 15:33:01.731667 systemd-networkd[1390]: calibdf46ab7666: Gained IPv6LL Feb 13 15:33:02.435895 systemd-networkd[1390]: vxlan.calico: Gained IPv6LL Feb 13 15:33:03.399037 containerd[1492]: time="2025-02-13T15:33:03.398959580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.401001 containerd[1492]: time="2025-02-13T15:33:03.400878033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:33:03.402675 containerd[1492]: time="2025-02-13T15:33:03.402266683Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.405228 containerd[1492]: time="2025-02-13T15:33:03.405166863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:03.406249 containerd[1492]: time="2025-02-13T15:33:03.406208390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size 
\"33323450\" in 3.555260708s" Feb 13 15:33:03.406441 containerd[1492]: time="2025-02-13T15:33:03.406420551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:33:03.410087 containerd[1492]: time="2025-02-13T15:33:03.410045296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:33:03.444690 containerd[1492]: time="2025-02-13T15:33:03.444227890Z" level=info msg="CreateContainer within sandbox \"ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:33:03.462703 kubelet[2821]: I0213 15:33:03.462154 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:33:03.490452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479069262.mount: Deactivated successfully. Feb 13 15:33:03.491217 containerd[1492]: time="2025-02-13T15:33:03.490705368Z" level=info msg="CreateContainer within sandbox \"ec20976738944f17559af8553e64b927c0d0b4ce1a724b08c40527038d8232b0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8\"" Feb 13 15:33:03.494620 containerd[1492]: time="2025-02-13T15:33:03.492272538Z" level=info msg="StartContainer for \"e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8\"" Feb 13 15:33:03.534671 systemd[1]: Started cri-containerd-e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8.scope - libcontainer container e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8. Feb 13 15:33:03.617000 containerd[1492]: time="2025-02-13T15:33:03.616918630Z" level=info msg="StartContainer for \"e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8\" returns successfully" Feb 13 15:33:04.257970 kubelet[2821]: I0213 15:33:04.257890 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b47d5c589-xd5hs" podStartSLOduration=17.700337434 podStartE2EDuration="21.257833038s" podCreationTimestamp="2025-02-13 15:32:43 +0000 UTC" firstStartedPulling="2025-02-13 15:32:59.849691953 +0000 UTC m=+40.582607724" lastFinishedPulling="2025-02-13 15:33:03.407187557 +0000 UTC m=+44.140103328" observedRunningTime="2025-02-13 15:33:04.206147888 +0000 UTC m=+44.939063619" watchObservedRunningTime="2025-02-13 15:33:04.257833038 +0000 UTC m=+44.990748929" Feb 13 15:33:05.108321 containerd[1492]: time="2025-02-13T15:33:05.108253282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:05.110010 containerd[1492]: time="2025-02-13T15:33:05.109806892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:33:05.111378 containerd[1492]: time="2025-02-13T15:33:05.111047660Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:05.113763 containerd[1492]: time="2025-02-13T15:33:05.113719758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:05.114790 containerd[1492]: 
time="2025-02-13T15:33:05.114706685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.704615149s" Feb 13 15:33:05.114790 containerd[1492]: time="2025-02-13T15:33:05.114770766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:33:05.116859 containerd[1492]: time="2025-02-13T15:33:05.115886293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:33:05.119230 containerd[1492]: time="2025-02-13T15:33:05.119154195Z" level=info msg="CreateContainer within sandbox \"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:33:05.159863 containerd[1492]: time="2025-02-13T15:33:05.159761948Z" level=info msg="CreateContainer within sandbox \"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ae20ff08d3f51ce5979c00b411be9fdbee038f15af1ba3c5c42bc2e521466772\"" Feb 13 15:33:05.161800 containerd[1492]: time="2025-02-13T15:33:05.160751115Z" level=info msg="StartContainer for \"ae20ff08d3f51ce5979c00b411be9fdbee038f15af1ba3c5c42bc2e521466772\"" Feb 13 15:33:05.209725 systemd[1]: Started cri-containerd-ae20ff08d3f51ce5979c00b411be9fdbee038f15af1ba3c5c42bc2e521466772.scope - libcontainer container ae20ff08d3f51ce5979c00b411be9fdbee038f15af1ba3c5c42bc2e521466772. 
Feb 13 15:33:05.249879 containerd[1492]: time="2025-02-13T15:33:05.249828035Z" level=info msg="StartContainer for \"ae20ff08d3f51ce5979c00b411be9fdbee038f15af1ba3c5c42bc2e521466772\" returns successfully" Feb 13 15:33:07.470039 containerd[1492]: time="2025-02-13T15:33:07.468846601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:07.471015 containerd[1492]: time="2025-02-13T15:33:07.470934495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:33:07.471337 containerd[1492]: time="2025-02-13T15:33:07.471296057Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:07.475617 containerd[1492]: time="2025-02-13T15:33:07.475553606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:07.476487 containerd[1492]: time="2025-02-13T15:33:07.476432851Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.360501038s" Feb 13 15:33:07.476671 containerd[1492]: time="2025-02-13T15:33:07.476649053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:33:07.478286 containerd[1492]: time="2025-02-13T15:33:07.478014342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:33:07.480669 containerd[1492]: time="2025-02-13T15:33:07.480613319Z" level=info msg="CreateContainer within sandbox \"b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:33:07.502808 containerd[1492]: time="2025-02-13T15:33:07.502673026Z" level=info msg="CreateContainer within sandbox \"b788d082bacd8b78488be0e03ac682f406eba1f19b1a2699384a8a1ef35253d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d1169fb19423970994ac440f3d02b0c11328ef52b5c610a1b8efca4ac798d8cf\"" Feb 13 15:33:07.505139 containerd[1492]: time="2025-02-13T15:33:07.505094642Z" level=info msg="StartContainer for \"d1169fb19423970994ac440f3d02b0c11328ef52b5c610a1b8efca4ac798d8cf\"" Feb 13 15:33:07.543608 systemd[1]: Started cri-containerd-d1169fb19423970994ac440f3d02b0c11328ef52b5c610a1b8efca4ac798d8cf.scope - libcontainer container d1169fb19423970994ac440f3d02b0c11328ef52b5c610a1b8efca4ac798d8cf. 
Feb 13 15:33:07.586926 containerd[1492]: time="2025-02-13T15:33:07.586835384Z" level=info msg="StartContainer for \"d1169fb19423970994ac440f3d02b0c11328ef52b5c610a1b8efca4ac798d8cf\" returns successfully" Feb 13 15:33:07.981329 containerd[1492]: time="2025-02-13T15:33:07.980512437Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:07.981635 containerd[1492]: time="2025-02-13T15:33:07.981594484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:33:07.983933 containerd[1492]: time="2025-02-13T15:33:07.983881059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 505.811437ms" Feb 13 15:33:07.983933 containerd[1492]: time="2025-02-13T15:33:07.983929139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:33:07.985184 containerd[1492]: time="2025-02-13T15:33:07.984592064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:33:07.988439 containerd[1492]: time="2025-02-13T15:33:07.988396409Z" level=info msg="CreateContainer within sandbox \"f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:33:08.009116 containerd[1492]: time="2025-02-13T15:33:08.008729464Z" level=info msg="CreateContainer within sandbox \"f903404a4b52043d5c3acfd968afc36da68e97ac1ce90fbb598dcc5949e0e35a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af2c8781912d90f36780c47dca0bf0c657680188fc984d78d271f6c5250b0564\"" Feb 13 15:33:08.011365 containerd[1492]: time="2025-02-13T15:33:08.011313641Z" level=info msg="StartContainer for \"af2c8781912d90f36780c47dca0bf0c657680188fc984d78d271f6c5250b0564\"" Feb 13 15:33:08.041622 systemd[1]: Started cri-containerd-af2c8781912d90f36780c47dca0bf0c657680188fc984d78d271f6c5250b0564.scope - libcontainer container af2c8781912d90f36780c47dca0bf0c657680188fc984d78d271f6c5250b0564. 
Feb 13 15:33:08.212094 containerd[1492]: time="2025-02-13T15:33:08.211766722Z" level=info msg="StartContainer for \"af2c8781912d90f36780c47dca0bf0c657680188fc984d78d271f6c5250b0564\" returns successfully" Feb 13 15:33:08.247016 kubelet[2821]: I0213 15:33:08.246826 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78f7c5565-5c4qp" podStartSLOduration=21.161049692 podStartE2EDuration="28.246774872s" podCreationTimestamp="2025-02-13 15:32:40 +0000 UTC" firstStartedPulling="2025-02-13 15:33:00.391301475 +0000 UTC m=+41.124217246" lastFinishedPulling="2025-02-13 15:33:07.477026655 +0000 UTC m=+48.209942426" observedRunningTime="2025-02-13 15:33:08.245686465 +0000 UTC m=+48.978602316" watchObservedRunningTime="2025-02-13 15:33:08.246774872 +0000 UTC m=+48.979690643" Feb 13 15:33:09.235252 kubelet[2821]: I0213 15:33:09.234969 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:33:10.238943 kubelet[2821]: I0213 15:33:10.238212 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:33:10.923736 containerd[1492]: time="2025-02-13T15:33:10.923629068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.936410 containerd[1492]: time="2025-02-13T15:33:10.935625826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:33:10.936410 containerd[1492]: time="2025-02-13T15:33:10.935746547Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.940886 containerd[1492]: time="2025-02-13T15:33:10.940830260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:33:10.941671 containerd[1492]: time="2025-02-13T15:33:10.941548905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 2.956913241s" Feb 13 15:33:10.941671 containerd[1492]: time="2025-02-13T15:33:10.941663665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:33:10.946268 containerd[1492]: time="2025-02-13T15:33:10.946200295Z" level=info msg="CreateContainer within sandbox \"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:33:10.978772 containerd[1492]: time="2025-02-13T15:33:10.978715266Z" level=info msg="CreateContainer within sandbox \"265ac922555b62640febe25d2599dffbdffcc73a05ab49e50131479ab0c63c89\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"328ec28b12d8dd64dd5c82d25969304416ecb53cf5a825ddcbcfd7eb6c251bd8\"" Feb 13 15:33:10.979538 containerd[1492]: 
time="2025-02-13T15:33:10.979488551Z" level=info msg="StartContainer for \"328ec28b12d8dd64dd5c82d25969304416ecb53cf5a825ddcbcfd7eb6c251bd8\"" Feb 13 15:33:11.032779 systemd[1]: Started cri-containerd-328ec28b12d8dd64dd5c82d25969304416ecb53cf5a825ddcbcfd7eb6c251bd8.scope - libcontainer container 328ec28b12d8dd64dd5c82d25969304416ecb53cf5a825ddcbcfd7eb6c251bd8. Feb 13 15:33:11.117042 containerd[1492]: time="2025-02-13T15:33:11.116882000Z" level=info msg="StartContainer for \"328ec28b12d8dd64dd5c82d25969304416ecb53cf5a825ddcbcfd7eb6c251bd8\" returns successfully" Feb 13 15:33:11.268317 kubelet[2821]: I0213 15:33:11.268045 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78f7c5565-fnfv9" podStartSLOduration=23.755169911 podStartE2EDuration="31.267934496s" podCreationTimestamp="2025-02-13 15:32:40 +0000 UTC" firstStartedPulling="2025-02-13 15:33:00.471608397 +0000 UTC m=+41.204524168" lastFinishedPulling="2025-02-13 15:33:07.984372982 +0000 UTC m=+48.717288753" observedRunningTime="2025-02-13 15:33:09.261669149 +0000 UTC m=+49.994584960" watchObservedRunningTime="2025-02-13 15:33:11.267934496 +0000 UTC m=+52.000850547" Feb 13 15:33:11.270706 kubelet[2821]: I0213 15:33:11.270666 2821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-6lfhr" podStartSLOduration=18.27341011 podStartE2EDuration="29.270612833s" podCreationTimestamp="2025-02-13 15:32:42 +0000 UTC" firstStartedPulling="2025-02-13 15:32:59.945023066 +0000 UTC m=+40.677938797" lastFinishedPulling="2025-02-13 15:33:10.942225749 +0000 UTC m=+51.675141520" observedRunningTime="2025-02-13 15:33:11.269412505 +0000 UTC m=+52.002328316" watchObservedRunningTime="2025-02-13 15:33:11.270612833 +0000 UTC m=+52.003528644" Feb 13 15:33:11.596065 kubelet[2821]: I0213 15:33:11.595205 2821 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:33:11.596065 kubelet[2821]: I0213 15:33:11.595286 2821 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:33:19.450929 containerd[1492]: time="2025-02-13T15:33:19.450179201Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:33:19.452421 containerd[1492]: time="2025-02-13T15:33:19.451003926Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:33:19.452421 containerd[1492]: time="2025-02-13T15:33:19.451028566Z" level=info msg="StopPodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:33:19.452421 containerd[1492]: time="2025-02-13T15:33:19.452107693Z" level=info msg="RemovePodSandbox for \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:33:19.452421 containerd[1492]: time="2025-02-13T15:33:19.452141653Z" level=info msg="Forcibly stopping sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\"" Feb 13 15:33:19.452421 containerd[1492]: time="2025-02-13T15:33:19.452227534Z" level=info msg="TearDown network for sandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" successfully" Feb 13 15:33:19.458848 containerd[1492]: time="2025-02-13T15:33:19.458484572Z" level=warning msg="Failed to get 
podSandbox status for container event for sandboxID \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.458848 containerd[1492]: time="2025-02-13T15:33:19.458641813Z" level=info msg="RemovePodSandbox \"62b59f3122fe87ba18ef207096d0e66cd37990b1c2c0b77a0f236fccdbae97af\" returns successfully" Feb 13 15:33:19.459561 containerd[1492]: time="2025-02-13T15:33:19.459319217Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:33:19.459561 containerd[1492]: time="2025-02-13T15:33:19.459476298Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:33:19.459561 containerd[1492]: time="2025-02-13T15:33:19.459490458Z" level=info msg="StopPodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:33:19.460149 containerd[1492]: time="2025-02-13T15:33:19.460122142Z" level=info msg="RemovePodSandbox for \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:33:19.460149 containerd[1492]: time="2025-02-13T15:33:19.460155022Z" level=info msg="Forcibly stopping sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\"" Feb 13 15:33:19.460238 containerd[1492]: time="2025-02-13T15:33:19.460230863Z" level=info msg="TearDown network for sandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" successfully" Feb 13 15:33:19.464480 containerd[1492]: time="2025-02-13T15:33:19.464432529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.464480 containerd[1492]: time="2025-02-13T15:33:19.464515729Z" level=info msg="RemovePodSandbox \"a0de3e6111904299e38fdd84bd163166f86bf8ca1de35e70b0977bc59aaec813\" returns successfully" Feb 13 15:33:19.466058 containerd[1492]: time="2025-02-13T15:33:19.465691617Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:33:19.466058 containerd[1492]: time="2025-02-13T15:33:19.465868058Z" level=info msg="TearDown network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" successfully" Feb 13 15:33:19.466058 containerd[1492]: time="2025-02-13T15:33:19.465881458Z" level=info msg="StopPodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" returns successfully" Feb 13 15:33:19.466910 containerd[1492]: time="2025-02-13T15:33:19.466850344Z" level=info msg="RemovePodSandbox for \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:33:19.467006 containerd[1492]: time="2025-02-13T15:33:19.466985065Z" level=info msg="Forcibly stopping sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\"" Feb 13 15:33:19.468397 containerd[1492]: time="2025-02-13T15:33:19.467207186Z" level=info msg="TearDown network for sandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" successfully" Feb 13 15:33:19.474949 containerd[1492]: time="2025-02-13T15:33:19.474870033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.474949 containerd[1492]: time="2025-02-13T15:33:19.474958634Z" level=info msg="RemovePodSandbox \"47142b8aa73b65464e6d0f99ac61117235ccd8540a0d1aa0142333b319ce67a8\" returns successfully" Feb 13 15:33:19.476482 containerd[1492]: time="2025-02-13T15:33:19.475950320Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" Feb 13 15:33:19.476482 containerd[1492]: time="2025-02-13T15:33:19.476129481Z" level=info msg="TearDown network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" successfully" Feb 13 15:33:19.476482 containerd[1492]: time="2025-02-13T15:33:19.476151321Z" level=info msg="StopPodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" returns successfully" Feb 13 15:33:19.478401 containerd[1492]: time="2025-02-13T15:33:19.477254048Z" level=info msg="RemovePodSandbox for \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" Feb 13 15:33:19.478401 containerd[1492]: time="2025-02-13T15:33:19.477332808Z" level=info msg="Forcibly stopping sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\"" Feb 13 15:33:19.478401 containerd[1492]: time="2025-02-13T15:33:19.477666050Z" level=info msg="TearDown network for sandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" successfully" Feb 13 15:33:19.485807 containerd[1492]: time="2025-02-13T15:33:19.485681780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.485807 containerd[1492]: time="2025-02-13T15:33:19.485824341Z" level=info msg="RemovePodSandbox \"6ad27c13e83cf2509785c19496e5f851d5356672db2718f7106a2592e42114b2\" returns successfully" Feb 13 15:33:19.487076 containerd[1492]: time="2025-02-13T15:33:19.486626666Z" level=info msg="StopPodSandbox for \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\"" Feb 13 15:33:19.487076 containerd[1492]: time="2025-02-13T15:33:19.486805867Z" level=info msg="TearDown network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" successfully" Feb 13 15:33:19.487076 containerd[1492]: time="2025-02-13T15:33:19.486819907Z" level=info msg="StopPodSandbox for \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" returns successfully" Feb 13 15:33:19.487740 containerd[1492]: time="2025-02-13T15:33:19.487533711Z" level=info msg="RemovePodSandbox for \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\"" Feb 13 15:33:19.487740 containerd[1492]: time="2025-02-13T15:33:19.487574152Z" level=info msg="Forcibly stopping sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\"" Feb 13 15:33:19.487843 containerd[1492]: time="2025-02-13T15:33:19.487741753Z" level=info msg="TearDown network for sandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" successfully" Feb 13 15:33:19.491925 containerd[1492]: time="2025-02-13T15:33:19.491761177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.491925 containerd[1492]: time="2025-02-13T15:33:19.491854138Z" level=info msg="RemovePodSandbox \"511418bbb6d2267bc4ac95e949177986bccf2e2617e8644a8f8bb1a555375c11\" returns successfully" Feb 13 15:33:19.493858 containerd[1492]: time="2025-02-13T15:33:19.493311267Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:33:19.493858 containerd[1492]: time="2025-02-13T15:33:19.493563429Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:33:19.493858 containerd[1492]: time="2025-02-13T15:33:19.493609949Z" level=info msg="StopPodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:33:19.495710 containerd[1492]: time="2025-02-13T15:33:19.495546041Z" level=info msg="RemovePodSandbox for \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:33:19.496087 containerd[1492]: time="2025-02-13T15:33:19.495823522Z" level=info msg="Forcibly stopping sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\"" Feb 13 15:33:19.496875 containerd[1492]: time="2025-02-13T15:33:19.496528647Z" level=info msg="TearDown network for sandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" successfully" Feb 13 15:33:19.503006 containerd[1492]: time="2025-02-13T15:33:19.502871766Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.503006 containerd[1492]: time="2025-02-13T15:33:19.502964247Z" level=info msg="RemovePodSandbox \"935ec52b9961e9b68feffbf0c174857cc6366d84ce68d3300a91ca046adc4ea0\" returns successfully" Feb 13 15:33:19.504203 containerd[1492]: time="2025-02-13T15:33:19.503808932Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:33:19.504203 containerd[1492]: time="2025-02-13T15:33:19.503931373Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:33:19.504203 containerd[1492]: time="2025-02-13T15:33:19.503943973Z" level=info msg="StopPodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:33:19.504850 containerd[1492]: time="2025-02-13T15:33:19.504806498Z" level=info msg="RemovePodSandbox for \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:33:19.504909 containerd[1492]: time="2025-02-13T15:33:19.504863218Z" level=info msg="Forcibly stopping sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\"" Feb 13 15:33:19.505028 containerd[1492]: time="2025-02-13T15:33:19.505005539Z" level=info msg="TearDown network for sandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" successfully" Feb 13 15:33:19.529509 containerd[1492]: time="2025-02-13T15:33:19.529444330Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.529720 containerd[1492]: time="2025-02-13T15:33:19.529555731Z" level=info msg="RemovePodSandbox \"1b48c9992209ef9919f664a28c8f1e27e20c327d7d11b5f524ca0c09b2bdb82e\" returns successfully" Feb 13 15:33:19.531143 containerd[1492]: time="2025-02-13T15:33:19.531067140Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:33:19.531625 containerd[1492]: time="2025-02-13T15:33:19.531470222Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:33:19.531625 containerd[1492]: time="2025-02-13T15:33:19.531496703Z" level=info msg="StopPodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:33:19.532134 containerd[1492]: time="2025-02-13T15:33:19.532079066Z" level=info msg="RemovePodSandbox for \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:33:19.532134 containerd[1492]: time="2025-02-13T15:33:19.532125346Z" level=info msg="Forcibly stopping sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\"" Feb 13 15:33:19.532299 containerd[1492]: time="2025-02-13T15:33:19.532256387Z" level=info msg="TearDown network for sandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" successfully" Feb 13 15:33:19.536926 containerd[1492]: time="2025-02-13T15:33:19.536865936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.537407 containerd[1492]: time="2025-02-13T15:33:19.536948056Z" level=info msg="RemovePodSandbox \"03c315f420c36b808ac08895d0c9a2dcfef0d03263e86d711807069700ed3d8c\" returns successfully" Feb 13 15:33:19.538218 containerd[1492]: time="2025-02-13T15:33:19.537958022Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:33:19.538218 containerd[1492]: time="2025-02-13T15:33:19.538095823Z" level=info msg="TearDown network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" successfully" Feb 13 15:33:19.538218 containerd[1492]: time="2025-02-13T15:33:19.538107783Z" level=info msg="StopPodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" returns successfully" Feb 13 15:33:19.539392 containerd[1492]: time="2025-02-13T15:33:19.538891428Z" level=info msg="RemovePodSandbox for \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:33:19.539392 containerd[1492]: time="2025-02-13T15:33:19.539026469Z" level=info msg="Forcibly stopping sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\"" Feb 13 15:33:19.539392 containerd[1492]: time="2025-02-13T15:33:19.539166110Z" level=info msg="TearDown network for sandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" successfully" Feb 13 15:33:19.543758 containerd[1492]: time="2025-02-13T15:33:19.543580817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.543758 containerd[1492]: time="2025-02-13T15:33:19.543743658Z" level=info msg="RemovePodSandbox \"e096267fe9de3ed341b98be1986f030e662dcbba5c1d920a0bff054f91dd96d2\" returns successfully" Feb 13 15:33:19.544417 containerd[1492]: time="2025-02-13T15:33:19.544343702Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" Feb 13 15:33:19.544504 containerd[1492]: time="2025-02-13T15:33:19.544488583Z" level=info msg="TearDown network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" successfully" Feb 13 15:33:19.544504 containerd[1492]: time="2025-02-13T15:33:19.544499223Z" level=info msg="StopPodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" returns successfully" Feb 13 15:33:19.545112 containerd[1492]: time="2025-02-13T15:33:19.545067826Z" level=info msg="RemovePodSandbox for \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" Feb 13 15:33:19.545112 containerd[1492]: time="2025-02-13T15:33:19.545102347Z" level=info msg="Forcibly stopping sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\"" Feb 13 15:33:19.545274 containerd[1492]: time="2025-02-13T15:33:19.545179147Z" level=info msg="TearDown network for sandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" successfully" Feb 13 15:33:19.549975 containerd[1492]: time="2025-02-13T15:33:19.549870376Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.549975 containerd[1492]: time="2025-02-13T15:33:19.549984577Z" level=info msg="RemovePodSandbox \"59044dbbfcb70d8cda39891e2be6cfa5421a3b10947e289f8567a39180be6968\" returns successfully" Feb 13 15:33:19.550772 containerd[1492]: time="2025-02-13T15:33:19.550666701Z" level=info msg="StopPodSandbox for \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\"" Feb 13 15:33:19.553622 containerd[1492]: time="2025-02-13T15:33:19.550789742Z" level=info msg="TearDown network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" successfully" Feb 13 15:33:19.553622 containerd[1492]: time="2025-02-13T15:33:19.550800822Z" level=info msg="StopPodSandbox for \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" returns successfully" Feb 13 15:33:19.553622 containerd[1492]: time="2025-02-13T15:33:19.551325265Z" level=info msg="RemovePodSandbox for \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\"" Feb 13 15:33:19.553622 containerd[1492]: time="2025-02-13T15:33:19.551423746Z" level=info msg="Forcibly stopping sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\"" Feb 13 15:33:19.553622 containerd[1492]: time="2025-02-13T15:33:19.551774828Z" level=info msg="TearDown network for sandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" successfully" Feb 13 15:33:19.557273 containerd[1492]: time="2025-02-13T15:33:19.557170741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.557273 containerd[1492]: time="2025-02-13T15:33:19.557252221Z" level=info msg="RemovePodSandbox \"6591c42ecc62ccc7556a3e69c2e828fa059b1a951b01a26faa830d5247c1ca60\" returns successfully" Feb 13 15:33:19.559849 containerd[1492]: time="2025-02-13T15:33:19.559380595Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:33:19.559849 containerd[1492]: time="2025-02-13T15:33:19.559509315Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:33:19.559849 containerd[1492]: time="2025-02-13T15:33:19.559524316Z" level=info msg="StopPodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:33:19.562414 containerd[1492]: time="2025-02-13T15:33:19.561429167Z" level=info msg="RemovePodSandbox for \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:33:19.562414 containerd[1492]: time="2025-02-13T15:33:19.561510288Z" level=info msg="Forcibly stopping sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\"" Feb 13 15:33:19.562637 containerd[1492]: time="2025-02-13T15:33:19.562461934Z" level=info msg="TearDown network for sandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" successfully" Feb 13 15:33:19.573672 containerd[1492]: time="2025-02-13T15:33:19.573617442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.574000 containerd[1492]: time="2025-02-13T15:33:19.573971605Z" level=info msg="RemovePodSandbox \"d77b91b27564eb6b8c3340ab008325d5a04823ebf2a6c703187e3f85e857445a\" returns successfully" Feb 13 15:33:19.578015 containerd[1492]: time="2025-02-13T15:33:19.577959869Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:33:19.578173 containerd[1492]: time="2025-02-13T15:33:19.578131790Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:33:19.578173 containerd[1492]: time="2025-02-13T15:33:19.578143630Z" level=info msg="StopPodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:33:19.578668 containerd[1492]: time="2025-02-13T15:33:19.578633073Z" level=info msg="RemovePodSandbox for \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:33:19.578668 containerd[1492]: time="2025-02-13T15:33:19.578671234Z" level=info msg="Forcibly stopping sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\"" Feb 13 15:33:19.578834 containerd[1492]: time="2025-02-13T15:33:19.578785434Z" level=info msg="TearDown network for sandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" successfully" Feb 13 15:33:19.584044 containerd[1492]: time="2025-02-13T15:33:19.583903546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.584387 containerd[1492]: time="2025-02-13T15:33:19.584311508Z" level=info msg="RemovePodSandbox \"7fcffc93790ddc27ee579a3698a221685758b96b17ce62f34c1ae1072d715ad2\" returns successfully" Feb 13 15:33:19.585155 containerd[1492]: time="2025-02-13T15:33:19.585110993Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:33:19.585302 containerd[1492]: time="2025-02-13T15:33:19.585247074Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:33:19.585302 containerd[1492]: time="2025-02-13T15:33:19.585259034Z" level=info msg="StopPodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:33:19.586048 containerd[1492]: time="2025-02-13T15:33:19.586011479Z" level=info msg="RemovePodSandbox for \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:33:19.586048 containerd[1492]: time="2025-02-13T15:33:19.586046919Z" level=info msg="Forcibly stopping sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\"" Feb 13 15:33:19.586287 containerd[1492]: time="2025-02-13T15:33:19.586131440Z" level=info msg="TearDown network for sandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" successfully" Feb 13 15:33:19.590343 containerd[1492]: time="2025-02-13T15:33:19.590258425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.590733 containerd[1492]: time="2025-02-13T15:33:19.590392506Z" level=info msg="RemovePodSandbox \"c6fdc8ee94bd5c118092994719cfb1b211141f0437e433269d980e56ccb1e139\" returns successfully" Feb 13 15:33:19.592205 containerd[1492]: time="2025-02-13T15:33:19.591569673Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:33:19.592205 containerd[1492]: time="2025-02-13T15:33:19.591794995Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:33:19.592205 containerd[1492]: time="2025-02-13T15:33:19.591816915Z" level=info msg="StopPodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:33:19.593125 containerd[1492]: time="2025-02-13T15:33:19.592993202Z" level=info msg="RemovePodSandbox for \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:33:19.593125 containerd[1492]: time="2025-02-13T15:33:19.593036322Z" level=info msg="Forcibly stopping sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\"" Feb 13 15:33:19.593277 containerd[1492]: time="2025-02-13T15:33:19.593171363Z" level=info msg="TearDown network for sandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" successfully" Feb 13 15:33:19.597514 containerd[1492]: time="2025-02-13T15:33:19.597408829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.597514 containerd[1492]: time="2025-02-13T15:33:19.597494390Z" level=info msg="RemovePodSandbox \"a8fdb4c84f6eacfb35e8ea378f0b7e9245f7f0116dfcd05cfd32744debc28e8f\" returns successfully" Feb 13 15:33:19.598433 containerd[1492]: time="2025-02-13T15:33:19.598110154Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:33:19.598433 containerd[1492]: time="2025-02-13T15:33:19.598232674Z" level=info msg="TearDown network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" successfully" Feb 13 15:33:19.598433 containerd[1492]: time="2025-02-13T15:33:19.598245514Z" level=info msg="StopPodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" returns successfully" Feb 13 15:33:19.599343 containerd[1492]: time="2025-02-13T15:33:19.599269401Z" level=info msg="RemovePodSandbox for \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:33:19.599343 containerd[1492]: time="2025-02-13T15:33:19.599328441Z" level=info msg="Forcibly stopping sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\"" Feb 13 15:33:19.599659 containerd[1492]: time="2025-02-13T15:33:19.599445962Z" level=info msg="TearDown network for sandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" successfully" Feb 13 15:33:19.604689 containerd[1492]: time="2025-02-13T15:33:19.604613794Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.605268 containerd[1492]: time="2025-02-13T15:33:19.604705594Z" level=info msg="RemovePodSandbox \"2002627cf4928ec2d25438cbf331794e9460e491b0ab34eb40802534e439c74f\" returns successfully" Feb 13 15:33:19.605486 containerd[1492]: time="2025-02-13T15:33:19.605367638Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" Feb 13 15:33:19.605538 containerd[1492]: time="2025-02-13T15:33:19.605523799Z" level=info msg="TearDown network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" successfully" Feb 13 15:33:19.605577 containerd[1492]: time="2025-02-13T15:33:19.605537679Z" level=info msg="StopPodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" returns successfully" Feb 13 15:33:19.606342 containerd[1492]: time="2025-02-13T15:33:19.606228324Z" level=info msg="RemovePodSandbox for \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" Feb 13 15:33:19.606342 containerd[1492]: time="2025-02-13T15:33:19.606272124Z" level=info msg="Forcibly stopping sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\"" Feb 13 15:33:19.606568 containerd[1492]: time="2025-02-13T15:33:19.606427285Z" level=info msg="TearDown network for sandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" successfully" Feb 13 15:33:19.611223 containerd[1492]: time="2025-02-13T15:33:19.611114714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.611223 containerd[1492]: time="2025-02-13T15:33:19.611212674Z" level=info msg="RemovePodSandbox \"e412b4c49c7817d7dabc0da65c301be74f5c7c0656e817332218d979fa7762e3\" returns successfully" Feb 13 15:33:19.612134 containerd[1492]: time="2025-02-13T15:33:19.612043840Z" level=info msg="StopPodSandbox for \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\"" Feb 13 15:33:19.612282 containerd[1492]: time="2025-02-13T15:33:19.612191920Z" level=info msg="TearDown network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" successfully" Feb 13 15:33:19.612282 containerd[1492]: time="2025-02-13T15:33:19.612207321Z" level=info msg="StopPodSandbox for \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" returns successfully" Feb 13 15:33:19.615488 containerd[1492]: time="2025-02-13T15:33:19.613712570Z" level=info msg="RemovePodSandbox for \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\"" Feb 13 15:33:19.615488 containerd[1492]: time="2025-02-13T15:33:19.613823971Z" level=info msg="Forcibly stopping sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\"" Feb 13 15:33:19.615488 containerd[1492]: time="2025-02-13T15:33:19.614004132Z" level=info msg="TearDown network for sandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" successfully" Feb 13 15:33:19.620826 containerd[1492]: time="2025-02-13T15:33:19.620689093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.621169 containerd[1492]: time="2025-02-13T15:33:19.621141936Z" level=info msg="RemovePodSandbox \"3bb09667212c1275d96860333c5739abcf332d060a8174430b37a8d3db96fd8e\" returns successfully" Feb 13 15:33:19.622173 containerd[1492]: time="2025-02-13T15:33:19.622134702Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:33:19.622450 containerd[1492]: time="2025-02-13T15:33:19.622267663Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:33:19.622450 containerd[1492]: time="2025-02-13T15:33:19.622278823Z" level=info msg="StopPodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:33:19.623784 containerd[1492]: time="2025-02-13T15:33:19.623637031Z" level=info msg="RemovePodSandbox for \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:33:19.624647 containerd[1492]: time="2025-02-13T15:33:19.623680631Z" level=info msg="Forcibly stopping sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\"" Feb 13 15:33:19.624812 containerd[1492]: time="2025-02-13T15:33:19.624751318Z" level=info msg="TearDown network for sandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" successfully" Feb 13 15:33:19.632110 containerd[1492]: time="2025-02-13T15:33:19.631673041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.632110 containerd[1492]: time="2025-02-13T15:33:19.631829242Z" level=info msg="RemovePodSandbox \"bf0ed19c4fb8812938a45816f5cfdcde79a345a25abb7c5d4daeec27198f6d7d\" returns successfully" Feb 13 15:33:19.632716 containerd[1492]: time="2025-02-13T15:33:19.632655047Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:33:19.632825 containerd[1492]: time="2025-02-13T15:33:19.632788248Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:33:19.632825 containerd[1492]: time="2025-02-13T15:33:19.632802688Z" level=info msg="StopPodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:33:19.633965 containerd[1492]: time="2025-02-13T15:33:19.633925455Z" level=info msg="RemovePodSandbox for \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:33:19.633965 containerd[1492]: time="2025-02-13T15:33:19.633969415Z" level=info msg="Forcibly stopping sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\"" Feb 13 15:33:19.634169 containerd[1492]: time="2025-02-13T15:33:19.634097936Z" level=info msg="TearDown network for sandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" successfully" Feb 13 15:33:19.643035 containerd[1492]: time="2025-02-13T15:33:19.642990550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.643473 containerd[1492]: time="2025-02-13T15:33:19.643256232Z" level=info msg="RemovePodSandbox \"417aaff5b81004311b665353002599fa2482ebd216e69bcc5c2527cb4ce58866\" returns successfully" Feb 13 15:33:19.644203 containerd[1492]: time="2025-02-13T15:33:19.643994957Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:33:19.644203 containerd[1492]: time="2025-02-13T15:33:19.644121077Z" level=info msg="TearDown network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" successfully" Feb 13 15:33:19.644203 containerd[1492]: time="2025-02-13T15:33:19.644133238Z" level=info msg="StopPodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" returns successfully" Feb 13 15:33:19.644577 containerd[1492]: time="2025-02-13T15:33:19.644549240Z" level=info msg="RemovePodSandbox for \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:33:19.644577 containerd[1492]: time="2025-02-13T15:33:19.644584240Z" level=info msg="Forcibly stopping sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\"" Feb 13 15:33:19.644734 containerd[1492]: time="2025-02-13T15:33:19.644680001Z" level=info msg="TearDown network for sandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" successfully" Feb 13 15:33:19.648507 containerd[1492]: time="2025-02-13T15:33:19.648445104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.648651 containerd[1492]: time="2025-02-13T15:33:19.648545785Z" level=info msg="RemovePodSandbox \"573395d8c8f9523d35945a5c20c05b275b9d748097e5dba0bdcb0c0741b1e94f\" returns successfully" Feb 13 15:33:19.649276 containerd[1492]: time="2025-02-13T15:33:19.649085508Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" Feb 13 15:33:19.649276 containerd[1492]: time="2025-02-13T15:33:19.649200149Z" level=info msg="TearDown network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" successfully" Feb 13 15:33:19.649276 containerd[1492]: time="2025-02-13T15:33:19.649209549Z" level=info msg="StopPodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" returns successfully" Feb 13 15:33:19.649789 containerd[1492]: time="2025-02-13T15:33:19.649645432Z" level=info msg="RemovePodSandbox for \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" Feb 13 15:33:19.649789 containerd[1492]: time="2025-02-13T15:33:19.649677912Z" level=info msg="Forcibly stopping sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\"" Feb 13 15:33:19.650022 containerd[1492]: time="2025-02-13T15:33:19.649920833Z" level=info msg="TearDown network for sandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" successfully" Feb 13 15:33:19.654219 containerd[1492]: time="2025-02-13T15:33:19.654106459Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.654219 containerd[1492]: time="2025-02-13T15:33:19.654208260Z" level=info msg="RemovePodSandbox \"ad03c26208b56be3f292848dfc5f2d27fba774fa078423d62c142758146267d4\" returns successfully" Feb 13 15:33:19.655437 containerd[1492]: time="2025-02-13T15:33:19.655041665Z" level=info msg="StopPodSandbox for \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\"" Feb 13 15:33:19.655437 containerd[1492]: time="2025-02-13T15:33:19.655265666Z" level=info msg="TearDown network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" successfully" Feb 13 15:33:19.655437 containerd[1492]: time="2025-02-13T15:33:19.655287346Z" level=info msg="StopPodSandbox for \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" returns successfully" Feb 13 15:33:19.656756 containerd[1492]: time="2025-02-13T15:33:19.656245952Z" level=info msg="RemovePodSandbox for \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\"" Feb 13 15:33:19.656756 containerd[1492]: time="2025-02-13T15:33:19.656301473Z" level=info msg="Forcibly stopping sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\"" Feb 13 15:33:19.656756 containerd[1492]: time="2025-02-13T15:33:19.656557634Z" level=info msg="TearDown network for sandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" successfully" Feb 13 15:33:19.662622 containerd[1492]: time="2025-02-13T15:33:19.662394350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.662622 containerd[1492]: time="2025-02-13T15:33:19.662484751Z" level=info msg="RemovePodSandbox \"b8c3e5daaa8e2bf9741cb9d42df3873cbbf7c1ac879bc2f06aba97b0cbb198c9\" returns successfully" Feb 13 15:33:19.663413 containerd[1492]: time="2025-02-13T15:33:19.663286316Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:33:19.663739 containerd[1492]: time="2025-02-13T15:33:19.663565917Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:33:19.663739 containerd[1492]: time="2025-02-13T15:33:19.663587758Z" level=info msg="StopPodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:33:19.665993 containerd[1492]: time="2025-02-13T15:33:19.665955452Z" level=info msg="RemovePodSandbox for \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:33:19.665993 containerd[1492]: time="2025-02-13T15:33:19.666002252Z" level=info msg="Forcibly stopping sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\"" Feb 13 15:33:19.666147 containerd[1492]: time="2025-02-13T15:33:19.666101093Z" level=info msg="TearDown network for sandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" successfully" Feb 13 15:33:19.672779 containerd[1492]: time="2025-02-13T15:33:19.672510973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.672779 containerd[1492]: time="2025-02-13T15:33:19.672621493Z" level=info msg="RemovePodSandbox \"db6f3898a83838f08131227333c815f115a1257c8939db5bc0567f9ef2be5943\" returns successfully" Feb 13 15:33:19.673703 containerd[1492]: time="2025-02-13T15:33:19.673484099Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:33:19.673703 containerd[1492]: time="2025-02-13T15:33:19.673621859Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:33:19.673703 containerd[1492]: time="2025-02-13T15:33:19.673633900Z" level=info msg="StopPodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:33:19.675633 containerd[1492]: time="2025-02-13T15:33:19.674996188Z" level=info msg="RemovePodSandbox for \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:33:19.675633 containerd[1492]: time="2025-02-13T15:33:19.675033588Z" level=info msg="Forcibly stopping sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\"" Feb 13 15:33:19.675633 containerd[1492]: time="2025-02-13T15:33:19.675130109Z" level=info msg="TearDown network for sandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" successfully" Feb 13 15:33:19.679985 containerd[1492]: time="2025-02-13T15:33:19.679864378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.679985 containerd[1492]: time="2025-02-13T15:33:19.679948298Z" level=info msg="RemovePodSandbox \"ea194924c8d079849b6ef626fa0c697c500497f9ba67bd3401b582ee157730fc\" returns successfully" Feb 13 15:33:19.681131 containerd[1492]: time="2025-02-13T15:33:19.680886504Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:33:19.681131 containerd[1492]: time="2025-02-13T15:33:19.681038825Z" level=info msg="TearDown network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" successfully" Feb 13 15:33:19.681131 containerd[1492]: time="2025-02-13T15:33:19.681052745Z" level=info msg="StopPodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" returns successfully" Feb 13 15:33:19.681392 containerd[1492]: time="2025-02-13T15:33:19.681338867Z" level=info msg="RemovePodSandbox for \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:33:19.681392 containerd[1492]: time="2025-02-13T15:33:19.681376107Z" level=info msg="Forcibly stopping sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\"" Feb 13 15:33:19.681547 containerd[1492]: time="2025-02-13T15:33:19.681504308Z" level=info msg="TearDown network for sandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" successfully" Feb 13 15:33:19.685630 containerd[1492]: time="2025-02-13T15:33:19.685532093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.685630 containerd[1492]: time="2025-02-13T15:33:19.685650934Z" level=info msg="RemovePodSandbox \"e6f52921475732e8f8f514aa05f2141ba6234f0132cd2066c8f88301abeac815\" returns successfully" Feb 13 15:33:19.686229 containerd[1492]: time="2025-02-13T15:33:19.686156337Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" Feb 13 15:33:19.686713 containerd[1492]: time="2025-02-13T15:33:19.686278818Z" level=info msg="TearDown network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" successfully" Feb 13 15:33:19.686713 containerd[1492]: time="2025-02-13T15:33:19.686289738Z" level=info msg="StopPodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" returns successfully" Feb 13 15:33:19.688304 containerd[1492]: time="2025-02-13T15:33:19.687893748Z" level=info msg="RemovePodSandbox for \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" Feb 13 15:33:19.688304 containerd[1492]: time="2025-02-13T15:33:19.687959628Z" level=info msg="Forcibly stopping sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\"" Feb 13 15:33:19.688304 containerd[1492]: time="2025-02-13T15:33:19.688114069Z" level=info msg="TearDown network for sandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" successfully" Feb 13 15:33:19.692151 containerd[1492]: time="2025-02-13T15:33:19.692092933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.692498 containerd[1492]: time="2025-02-13T15:33:19.692388535Z" level=info msg="RemovePodSandbox \"a777880e53132fc2976f39b221012398e4ec1a04df25137b2f9966ad1d55a220\" returns successfully" Feb 13 15:33:19.693066 containerd[1492]: time="2025-02-13T15:33:19.693030139Z" level=info msg="StopPodSandbox for \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\"" Feb 13 15:33:19.693209 containerd[1492]: time="2025-02-13T15:33:19.693187900Z" level=info msg="TearDown network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" successfully" Feb 13 15:33:19.693276 containerd[1492]: time="2025-02-13T15:33:19.693209180Z" level=info msg="StopPodSandbox for \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" returns successfully" Feb 13 15:33:19.693783 containerd[1492]: time="2025-02-13T15:33:19.693750664Z" level=info msg="RemovePodSandbox for \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\"" Feb 13 15:33:19.693845 containerd[1492]: time="2025-02-13T15:33:19.693793464Z" level=info msg="Forcibly stopping sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\"" Feb 13 15:33:19.693912 containerd[1492]: time="2025-02-13T15:33:19.693893105Z" level=info msg="TearDown network for sandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" successfully" Feb 13 15:33:19.698861 containerd[1492]: time="2025-02-13T15:33:19.698784415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.699194 containerd[1492]: time="2025-02-13T15:33:19.698890975Z" level=info msg="RemovePodSandbox \"c116eac342363c067154053d33c14aa3fd28ff70ed546a192e2428ac16de22f8\" returns successfully" Feb 13 15:33:19.700157 containerd[1492]: time="2025-02-13T15:33:19.699893702Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:33:19.700157 containerd[1492]: time="2025-02-13T15:33:19.700037382Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:33:19.700157 containerd[1492]: time="2025-02-13T15:33:19.700048422Z" level=info msg="StopPodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:33:19.700741 containerd[1492]: time="2025-02-13T15:33:19.700501505Z" level=info msg="RemovePodSandbox for \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:33:19.700741 containerd[1492]: time="2025-02-13T15:33:19.700654026Z" level=info msg="Forcibly stopping sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\"" Feb 13 15:33:19.701466 containerd[1492]: time="2025-02-13T15:33:19.700891268Z" level=info msg="TearDown network for sandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" successfully" Feb 13 15:33:19.705803 containerd[1492]: time="2025-02-13T15:33:19.705737218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.706004 containerd[1492]: time="2025-02-13T15:33:19.705915819Z" level=info msg="RemovePodSandbox \"3cafa34bcc20602fb02150a943d515ab841d526e62bac4578a813a6f003eb3ba\" returns successfully" Feb 13 15:33:19.707759 containerd[1492]: time="2025-02-13T15:33:19.706609623Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:33:19.707759 containerd[1492]: time="2025-02-13T15:33:19.706858825Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:33:19.707759 containerd[1492]: time="2025-02-13T15:33:19.706885305Z" level=info msg="StopPodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:33:19.708010 containerd[1492]: time="2025-02-13T15:33:19.707833391Z" level=info msg="RemovePodSandbox for \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:33:19.708010 containerd[1492]: time="2025-02-13T15:33:19.707869191Z" level=info msg="Forcibly stopping sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\"" Feb 13 15:33:19.708010 containerd[1492]: time="2025-02-13T15:33:19.708004392Z" level=info msg="TearDown network for sandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" successfully" Feb 13 15:33:19.713989 containerd[1492]: time="2025-02-13T15:33:19.713837548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.713989 containerd[1492]: time="2025-02-13T15:33:19.713934668Z" level=info msg="RemovePodSandbox \"708977237d336ab44c8b5e6e67d14e74a4ad7ee365744e406a27a64de004e1e3\" returns successfully" Feb 13 15:33:19.714903 containerd[1492]: time="2025-02-13T15:33:19.714862874Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:33:19.715120 containerd[1492]: time="2025-02-13T15:33:19.714992675Z" level=info msg="TearDown network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" successfully" Feb 13 15:33:19.715120 containerd[1492]: time="2025-02-13T15:33:19.715006835Z" level=info msg="StopPodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" returns successfully" Feb 13 15:33:19.716606 containerd[1492]: time="2025-02-13T15:33:19.715722559Z" level=info msg="RemovePodSandbox for \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:33:19.716606 containerd[1492]: time="2025-02-13T15:33:19.715767679Z" level=info msg="Forcibly stopping sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\"" Feb 13 15:33:19.716606 containerd[1492]: time="2025-02-13T15:33:19.715898320Z" level=info msg="TearDown network for sandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" successfully" Feb 13 15:33:19.720560 containerd[1492]: time="2025-02-13T15:33:19.720502189Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.720863 containerd[1492]: time="2025-02-13T15:33:19.720835871Z" level=info msg="RemovePodSandbox \"e3f2e4032aa3e718047461985781dde1d33467ead2c78fe7278956a094571d72\" returns successfully" Feb 13 15:33:19.721691 containerd[1492]: time="2025-02-13T15:33:19.721649436Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" Feb 13 15:33:19.721828 containerd[1492]: time="2025-02-13T15:33:19.721781597Z" level=info msg="TearDown network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" successfully" Feb 13 15:33:19.721828 containerd[1492]: time="2025-02-13T15:33:19.721793237Z" level=info msg="StopPodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" returns successfully" Feb 13 15:33:19.724809 containerd[1492]: time="2025-02-13T15:33:19.723318486Z" level=info msg="RemovePodSandbox for \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" Feb 13 15:33:19.724809 containerd[1492]: time="2025-02-13T15:33:19.723386486Z" level=info msg="Forcibly stopping sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\"" Feb 13 15:33:19.724809 containerd[1492]: time="2025-02-13T15:33:19.723489287Z" level=info msg="TearDown network for sandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" successfully" Feb 13 15:33:19.730056 containerd[1492]: time="2025-02-13T15:33:19.729988647Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:33:19.730319 containerd[1492]: time="2025-02-13T15:33:19.730079008Z" level=info msg="RemovePodSandbox \"a83c1a7c6c3c6f10b66ea0cfaee14072d40da691948f8fa1b8ef1b8845304575\" returns successfully" Feb 13 15:33:19.733401 containerd[1492]: time="2025-02-13T15:33:19.730919373Z" level=info msg="StopPodSandbox for \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\"" Feb 13 15:33:19.733401 containerd[1492]: time="2025-02-13T15:33:19.731049174Z" level=info msg="TearDown network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" successfully" Feb 13 15:33:19.733401 containerd[1492]: time="2025-02-13T15:33:19.731062534Z" level=info msg="StopPodSandbox for \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" returns successfully" Feb 13 15:33:19.734094 containerd[1492]: time="2025-02-13T15:33:19.734043552Z" level=info msg="RemovePodSandbox for \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\"" Feb 13 15:33:19.734182 containerd[1492]: time="2025-02-13T15:33:19.734100153Z" level=info msg="Forcibly stopping sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\"" Feb 13 15:33:19.734214 containerd[1492]: time="2025-02-13T15:33:19.734204073Z" level=info msg="TearDown network for sandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" successfully" Feb 13 15:33:19.741530 containerd[1492]: time="2025-02-13T15:33:19.740286871Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:33:19.741530 containerd[1492]: time="2025-02-13T15:33:19.740396511Z" level=info msg="RemovePodSandbox \"fe77cc08514fa6e72b1d412cf0ff82572aab6b454110c1dc1e2f2ac807c7ce1a\" returns successfully" Feb 13 15:33:30.055176 kubelet[2821]: I0213 15:33:30.053331 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:33:33.482989 systemd[1]: run-containerd-runc-k8s.io-d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f-runc.V1XS0f.mount: Deactivated successfully. Feb 13 15:33:45.738747 kubelet[2821]: I0213 15:33:45.738506 2821 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:34:22.655129 systemd[1]: run-containerd-runc-k8s.io-e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8-runc.5XDeLr.mount: Deactivated successfully. Feb 13 15:35:03.483169 systemd[1]: run-containerd-runc-k8s.io-d1f83c690d58a84a3d7c4454aaf103c44c35520ceeba3c5cd46d1c4c7af03c5f-runc.vhGJAh.mount: Deactivated successfully. Feb 13 15:35:10.830814 systemd[1]: Started sshd@8-142.132.179.183:22-36.26.72.149:38114.service - OpenSSH per-connection server daemon (36.26.72.149:38114). Feb 13 15:35:12.106682 sshd[5979]: Invalid user quantum from 36.26.72.149 port 38114 Feb 13 15:35:12.347961 sshd[5979]: Received disconnect from 36.26.72.149 port 38114:11: Bye Bye [preauth] Feb 13 15:35:12.347961 sshd[5979]: Disconnected from invalid user quantum 36.26.72.149 port 38114 [preauth] Feb 13 15:35:12.351793 systemd[1]: sshd@8-142.132.179.183:22-36.26.72.149:38114.service: Deactivated successfully. Feb 13 15:35:36.190849 systemd[1]: run-containerd-runc-k8s.io-e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8-runc.YVbf73.mount: Deactivated successfully. 
Feb 13 15:35:46.481226 systemd[1]: Started sshd@9-142.132.179.183:22-162.240.0.135:52540.service - OpenSSH per-connection server daemon (162.240.0.135:52540). Feb 13 15:35:47.154410 sshd[6061]: Invalid user yasumoto from 162.240.0.135 port 52540 Feb 13 15:35:47.322694 sshd[6061]: Connection closed by invalid user yasumoto 162.240.0.135 port 52540 [preauth] Feb 13 15:35:47.325663 systemd[1]: sshd@9-142.132.179.183:22-162.240.0.135:52540.service: Deactivated successfully. Feb 13 15:36:26.665762 systemd[1]: Started sshd@10-142.132.179.183:22-14.103.118.121:43318.service - OpenSSH per-connection server daemon (14.103.118.121:43318). Feb 13 15:36:39.914730 sshd[6150]: kex_exchange_identification: read: Connection reset by peer Feb 13 15:36:39.914730 sshd[6150]: Connection reset by 14.103.118.121 port 43318 Feb 13 15:36:39.916591 systemd[1]: sshd@10-142.132.179.183:22-14.103.118.121:43318.service: Deactivated successfully. Feb 13 15:37:01.060760 systemd[1]: Started sshd@11-142.132.179.183:22-36.26.72.149:39076.service - OpenSSH per-connection server daemon (36.26.72.149:39076). Feb 13 15:37:02.296042 sshd[6219]: Invalid user radius from 36.26.72.149 port 39076 Feb 13 15:37:02.541095 sshd[6219]: Received disconnect from 36.26.72.149 port 39076:11: Bye Bye [preauth] Feb 13 15:37:02.541095 sshd[6219]: Disconnected from invalid user radius 36.26.72.149 port 39076 [preauth] Feb 13 15:37:02.547036 systemd[1]: sshd@11-142.132.179.183:22-36.26.72.149:39076.service: Deactivated successfully. Feb 13 15:37:11.117061 systemd[1]: Started sshd@12-142.132.179.183:22-139.178.89.65:44066.service - OpenSSH per-connection server daemon (139.178.89.65:44066). Feb 13 15:37:12.118246 sshd[6250]: Accepted publickey for core from 139.178.89.65 port 44066 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:12.121287 sshd-session[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:12.129063 systemd-logind[1466]: New session 8 of user core. Feb 13 15:37:12.139713 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:37:12.908689 sshd[6252]: Connection closed by 139.178.89.65 port 44066 Feb 13 15:37:12.908527 sshd-session[6250]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:12.913730 systemd[1]: sshd@12-142.132.179.183:22-139.178.89.65:44066.service: Deactivated successfully. Feb 13 15:37:12.916130 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:37:12.918599 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:37:12.920024 systemd-logind[1466]: Removed session 8. Feb 13 15:37:18.083613 systemd[1]: Started sshd@13-142.132.179.183:22-139.178.89.65:58120.service - OpenSSH per-connection server daemon (139.178.89.65:58120). Feb 13 15:37:19.067505 sshd[6266]: Accepted publickey for core from 139.178.89.65 port 58120 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:19.068316 sshd-session[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:19.073412 systemd-logind[1466]: New session 9 of user core. Feb 13 15:37:19.079747 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:37:19.843435 sshd[6268]: Connection closed by 139.178.89.65 port 58120 Feb 13 15:37:19.844592 sshd-session[6266]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:19.849312 systemd[1]: sshd@13-142.132.179.183:22-139.178.89.65:58120.service: Deactivated successfully. 
Feb 13 15:37:19.852926 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:37:19.856621 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:37:19.858483 systemd-logind[1466]: Removed session 9. Feb 13 15:37:20.028331 systemd[1]: Started sshd@14-142.132.179.183:22-139.178.89.65:58134.service - OpenSSH per-connection server daemon (139.178.89.65:58134). Feb 13 15:37:21.015905 sshd[6282]: Accepted publickey for core from 139.178.89.65 port 58134 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:21.017885 sshd-session[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:21.024567 systemd-logind[1466]: New session 10 of user core. Feb 13 15:37:21.034700 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:37:21.830171 sshd[6284]: Connection closed by 139.178.89.65 port 58134 Feb 13 15:37:21.831329 sshd-session[6282]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:21.836300 systemd[1]: sshd@14-142.132.179.183:22-139.178.89.65:58134.service: Deactivated successfully. Feb 13 15:37:21.841865 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:37:21.844261 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:37:21.845868 systemd-logind[1466]: Removed session 10. Feb 13 15:37:22.009987 systemd[1]: Started sshd@15-142.132.179.183:22-139.178.89.65:58144.service - OpenSSH per-connection server daemon (139.178.89.65:58144). Feb 13 15:37:22.665690 systemd[1]: run-containerd-runc-k8s.io-e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8-runc.3U7fjN.mount: Deactivated successfully. Feb 13 15:37:23.011186 sshd[6293]: Accepted publickey for core from 139.178.89.65 port 58144 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:23.013344 sshd-session[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:23.020211 systemd-logind[1466]: New session 11 of user core. Feb 13 15:37:23.025587 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:37:23.774343 sshd[6314]: Connection closed by 139.178.89.65 port 58144 Feb 13 15:37:23.775463 sshd-session[6293]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:23.781765 systemd[1]: sshd@15-142.132.179.183:22-139.178.89.65:58144.service: Deactivated successfully. Feb 13 15:37:23.785618 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:37:23.787166 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:37:23.788326 systemd-logind[1466]: Removed session 11. Feb 13 15:37:28.949829 systemd[1]: Started sshd@16-142.132.179.183:22-139.178.89.65:38750.service - OpenSSH per-connection server daemon (139.178.89.65:38750). Feb 13 15:37:29.939432 sshd[6329]: Accepted publickey for core from 139.178.89.65 port 38750 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:29.941824 sshd-session[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:29.950462 systemd-logind[1466]: New session 12 of user core. Feb 13 15:37:29.954660 systemd[1]: Started session-12.scope - Session 12 of User core. 
Feb 13 15:37:30.708521 sshd[6332]: Connection closed by 139.178.89.65 port 38750 Feb 13 15:37:30.709525 sshd-session[6329]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:30.714729 systemd[1]: sshd@16-142.132.179.183:22-139.178.89.65:38750.service: Deactivated successfully. Feb 13 15:37:30.718530 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:37:30.720040 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:37:30.721519 systemd-logind[1466]: Removed session 12. Feb 13 15:37:35.884733 systemd[1]: Started sshd@17-142.132.179.183:22-139.178.89.65:43384.service - OpenSSH per-connection server daemon (139.178.89.65:43384). Feb 13 15:37:36.196099 systemd[1]: run-containerd-runc-k8s.io-e69be219710439a253a6974aaf2d273e229560cb41ced0a2addc71f90019d8c8-runc.uS85gE.mount: Deactivated successfully. Feb 13 15:37:36.867631 sshd[6366]: Accepted publickey for core from 139.178.89.65 port 43384 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:36.871319 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:36.879817 systemd-logind[1466]: New session 13 of user core. Feb 13 15:37:36.886722 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:37:37.637900 sshd[6387]: Connection closed by 139.178.89.65 port 43384 Feb 13 15:37:37.638953 sshd-session[6366]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:37.644895 systemd[1]: sshd@17-142.132.179.183:22-139.178.89.65:43384.service: Deactivated successfully. Feb 13 15:37:37.649507 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:37:37.650972 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:37:37.654808 systemd-logind[1466]: Removed session 13. Feb 13 15:37:37.817904 systemd[1]: Started sshd@18-142.132.179.183:22-139.178.89.65:43394.service - OpenSSH per-connection server daemon (139.178.89.65:43394). Feb 13 15:37:38.813724 sshd[6397]: Accepted publickey for core from 139.178.89.65 port 43394 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:38.816554 sshd-session[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:38.823182 systemd-logind[1466]: New session 14 of user core. Feb 13 15:37:38.830673 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:37:39.700340 sshd[6399]: Connection closed by 139.178.89.65 port 43394 Feb 13 15:37:39.701238 sshd-session[6397]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:39.707453 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:37:39.707486 systemd[1]: sshd@18-142.132.179.183:22-139.178.89.65:43394.service: Deactivated successfully. Feb 13 15:37:39.710037 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:37:39.714094 systemd-logind[1466]: Removed session 14. Feb 13 15:37:39.880064 systemd[1]: Started sshd@19-142.132.179.183:22-139.178.89.65:43402.service - OpenSSH per-connection server daemon (139.178.89.65:43402). Feb 13 15:37:40.865394 sshd[6408]: Accepted publickey for core from 139.178.89.65 port 43402 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:40.867078 sshd-session[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:40.873667 systemd-logind[1466]: New session 15 of user core. 
Feb 13 15:37:40.878666 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:37:43.572941 sshd[6410]: Connection closed by 139.178.89.65 port 43402 Feb 13 15:37:43.574545 sshd-session[6408]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:43.582798 systemd[1]: sshd@19-142.132.179.183:22-139.178.89.65:43402.service: Deactivated successfully. Feb 13 15:37:43.585703 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:37:43.588898 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:37:43.590504 systemd-logind[1466]: Removed session 15. Feb 13 15:37:43.750687 systemd[1]: Started sshd@20-142.132.179.183:22-139.178.89.65:43416.service - OpenSSH per-connection server daemon (139.178.89.65:43416). Feb 13 15:37:44.750387 sshd[6427]: Accepted publickey for core from 139.178.89.65 port 43416 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:44.751573 sshd-session[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:44.760087 systemd-logind[1466]: New session 16 of user core. Feb 13 15:37:44.765812 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:37:45.668053 sshd[6429]: Connection closed by 139.178.89.65 port 43416 Feb 13 15:37:45.668864 sshd-session[6427]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:45.675447 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:37:45.675858 systemd[1]: sshd@20-142.132.179.183:22-139.178.89.65:43416.service: Deactivated successfully. Feb 13 15:37:45.679313 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:37:45.680767 systemd-logind[1466]: Removed session 16. Feb 13 15:37:45.849914 systemd[1]: Started sshd@21-142.132.179.183:22-139.178.89.65:40346.service - OpenSSH per-connection server daemon (139.178.89.65:40346). Feb 13 15:37:46.855883 sshd[6450]: Accepted publickey for core from 139.178.89.65 port 40346 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:46.858117 sshd-session[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:46.864627 systemd-logind[1466]: New session 17 of user core. Feb 13 15:37:46.869588 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:37:47.641453 sshd[6457]: Connection closed by 139.178.89.65 port 40346 Feb 13 15:37:47.642617 sshd-session[6450]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:47.649037 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:37:47.649920 systemd[1]: sshd@21-142.132.179.183:22-139.178.89.65:40346.service: Deactivated successfully. Feb 13 15:37:47.655678 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:37:47.658055 systemd-logind[1466]: Removed session 17. Feb 13 15:37:52.817483 systemd[1]: Started sshd@22-142.132.179.183:22-139.178.89.65:40360.service - OpenSSH per-connection server daemon (139.178.89.65:40360). Feb 13 15:37:53.803624 sshd[6492]: Accepted publickey for core from 139.178.89.65 port 40360 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:37:53.805930 sshd-session[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:37:53.812934 systemd-logind[1466]: New session 18 of user core. Feb 13 15:37:53.822626 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 15:37:54.575400 sshd[6494]: Connection closed by 139.178.89.65 port 40360 Feb 13 15:37:54.576623 sshd-session[6492]: pam_unix(sshd:session): session closed for user core Feb 13 15:37:54.581978 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:37:54.583105 systemd[1]: sshd@22-142.132.179.183:22-139.178.89.65:40360.service: Deactivated successfully. Feb 13 15:37:54.593109 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:37:54.596169 systemd-logind[1466]: Removed session 18. Feb 13 15:37:59.763556 systemd[1]: Started sshd@23-142.132.179.183:22-139.178.89.65:36998.service - OpenSSH per-connection server daemon (139.178.89.65:36998). Feb 13 15:38:00.758098 sshd[6504]: Accepted publickey for core from 139.178.89.65 port 36998 ssh2: RSA SHA256:Uozn9z6525dahd1u4B5WCCi8tKj4bLjcDsCj6OgO54I Feb 13 15:38:00.760170 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:38:00.766749 systemd-logind[1466]: New session 19 of user core. Feb 13 15:38:00.772318 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:38:01.533671 sshd[6506]: Connection closed by 139.178.89.65 port 36998 Feb 13 15:38:01.533568 sshd-session[6504]: pam_unix(sshd:session): session closed for user core Feb 13 15:38:01.539410 systemd[1]: sshd@23-142.132.179.183:22-139.178.89.65:36998.service: Deactivated successfully. Feb 13 15:38:01.543301 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:38:01.545102 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:38:01.546296 systemd-logind[1466]: Removed session 19. Feb 13 15:38:16.872674 systemd[1]: cri-containerd-72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01.scope: Deactivated successfully. Feb 13 15:38:16.872991 systemd[1]: cri-containerd-72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01.scope: Consumed 6.985s CPU time. Feb 13 15:38:16.901374 containerd[1492]: time="2025-02-13T15:38:16.901119208Z" level=info msg="shim disconnected" id=72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01 namespace=k8s.io Feb 13 15:38:16.901374 containerd[1492]: time="2025-02-13T15:38:16.901230568Z" level=warning msg="cleaning up after shim disconnected" id=72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01 namespace=k8s.io Feb 13 15:38:16.901374 containerd[1492]: time="2025-02-13T15:38:16.901243648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:16.903622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:17.194151 kubelet[2821]: I0213 15:38:17.192747 2821 scope.go:117] "RemoveContainer" containerID="72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01" Feb 13 15:38:17.196862 containerd[1492]: time="2025-02-13T15:38:17.196648817Z" level=info msg="CreateContainer within sandbox \"dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Feb 13 15:38:17.216396 containerd[1492]: time="2025-02-13T15:38:17.216010049Z" level=info msg="CreateContainer within sandbox \"dda144a600ef81815ad945f89c62ef63c4a429595f0cbe0a3281d4702eb7a32b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8\"" Feb 13 15:38:17.216746 containerd[1492]: time="2025-02-13T15:38:17.216711610Z" level=info msg="StartContainer for \"0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8\"" Feb 13 15:38:17.217132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3146554931.mount: Deactivated successfully. Feb 13 15:38:17.258631 systemd[1]: Started cri-containerd-0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8.scope - libcontainer container 0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8. Feb 13 15:38:17.291502 containerd[1492]: time="2025-02-13T15:38:17.291340214Z" level=info msg="StartContainer for \"0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8\" returns successfully" Feb 13 15:38:17.325038 kubelet[2821]: E0213 15:38:17.324986 2821 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49278->10.0.0.2:2379: read: connection timed out" Feb 13 15:38:17.720209 systemd[1]: cri-containerd-a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e.scope: Deactivated successfully. Feb 13 15:38:17.721000 systemd[1]: cri-containerd-a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e.scope: Consumed 6.285s CPU time, 20.2M memory peak, 0B memory swap peak. Feb 13 15:38:17.753850 containerd[1492]: time="2025-02-13T15:38:17.753764499Z" level=info msg="shim disconnected" id=a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e namespace=k8s.io Feb 13 15:38:17.753850 containerd[1492]: time="2025-02-13T15:38:17.753826899Z" level=warning msg="cleaning up after shim disconnected" id=a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e namespace=k8s.io Feb 13 15:38:17.753850 containerd[1492]: time="2025-02-13T15:38:17.753835179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:17.903035 systemd[1]: run-containerd-runc-k8s.io-0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8-runc.zHGyn8.mount: Deactivated successfully. Feb 13 15:38:17.903195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:18.200066 kubelet[2821]: I0213 15:38:18.200015 2821 scope.go:117] "RemoveContainer" containerID="a717110b2792408e97d72087ff22998f466acec1488e5fb5f2f826feadb2e54e" Feb 13 15:38:18.202893 containerd[1492]: time="2025-02-13T15:38:18.202845004Z" level=info msg="CreateContainer within sandbox \"81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 13 15:38:18.223951 containerd[1492]: time="2025-02-13T15:38:18.223795999Z" level=info msg="CreateContainer within sandbox \"81cb8fddaf3286e3ba0fa1afa5a5ac4d75a9df11b51ca7bbf927313f132e504b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302\"" Feb 13 15:38:18.224444 containerd[1492]: time="2025-02-13T15:38:18.224420440Z" level=info msg="StartContainer for \"6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302\"" Feb 13 15:38:18.262660 systemd[1]: Started cri-containerd-6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302.scope - libcontainer container 6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302. Feb 13 15:38:18.306174 containerd[1492]: time="2025-02-13T15:38:18.304325333Z" level=info msg="StartContainer for \"6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302\" returns successfully" Feb 13 15:38:18.668660 systemd[1]: cri-containerd-0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8.scope: Deactivated successfully. Feb 13 15:38:18.701365 containerd[1492]: time="2025-02-13T15:38:18.701073271Z" level=info msg="shim disconnected" id=0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8 namespace=k8s.io Feb 13 15:38:18.701365 containerd[1492]: time="2025-02-13T15:38:18.701159352Z" level=warning msg="cleaning up after shim disconnected" id=0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8 namespace=k8s.io Feb 13 15:38:18.701365 containerd[1492]: time="2025-02-13T15:38:18.701170032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:38:18.903845 systemd[1]: run-containerd-runc-k8s.io-6468af9e1e7c2e74a5b9e16c462ec2b3bf237fc1d70455add3be9f39fecdc302-runc.48T1N6.mount: Deactivated successfully. Feb 13 15:38:18.904017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8-rootfs.mount: Deactivated successfully. 
Feb 13 15:38:19.210312 kubelet[2821]: I0213 15:38:19.210216 2821 scope.go:117] "RemoveContainer" containerID="72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01" Feb 13 15:38:19.212252 kubelet[2821]: I0213 15:38:19.211680 2821 scope.go:117] "RemoveContainer" containerID="0595df7e6e4e5300ee9fd8184a810749dd9b5b04dea351cb2ec558d85ca36ec8" Feb 13 15:38:19.212252 kubelet[2821]: E0213 15:38:19.212001 2821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-c7ccbd65-dtv65_tigera-operator(49c41b27-698c-4ff0-8278-54529621f9bc)\"" pod="tigera-operator/tigera-operator-c7ccbd65-dtv65" podUID="49c41b27-698c-4ff0-8278-54529621f9bc" Feb 13 15:38:19.213523 containerd[1492]: time="2025-02-13T15:38:19.213405763Z" level=info msg="RemoveContainer for \"72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01\"" Feb 13 15:38:19.218050 containerd[1492]: time="2025-02-13T15:38:19.217675890Z" level=info msg="RemoveContainer for \"72f663f2006c8426f3d758112a15f22b183363562a9ac5d2bcfed5fef84e8f01\" returns successfully" Feb 13 15:38:21.353264 kubelet[2821]: E0213 15:38:21.353224 2821 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:49086->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4186-1-1-6-ce8ef0549e.1823cea566e3a02b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4186-1-1-6-ce8ef0549e,UID:113024124467c3d4afa32dd72de5c3a5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-6-ce8ef0549e,},FirstTimestamp:2025-02-13 15:38:10.886262827 +0000 UTC m=+351.619178638,LastTimestamp:2025-02-13 15:38:10.886262827 +0000 UTC m=+351.619178638,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-6-ce8ef0549e,}"