Feb 13 15:24:48.868700 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:24:48.868725 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025 Feb 13 15:24:48.868735 kernel: KASLR enabled Feb 13 15:24:48.868741 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Feb 13 15:24:48.868746 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Feb 13 15:24:48.868752 kernel: random: crng init done Feb 13 15:24:48.868759 kernel: secureboot: Secure boot disabled Feb 13 15:24:48.868765 kernel: ACPI: Early table checksum verification disabled Feb 13 15:24:48.868770 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Feb 13 15:24:48.868778 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:24:48.868784 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868790 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868795 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868801 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868808 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868816 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868822 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868828 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868835 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:48.868841 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:24:48.868847 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Feb 13 15:24:48.868853 kernel: NUMA: Failed to initialise from firmware Feb 13 15:24:48.868859 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:24:48.868866 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Feb 13 15:24:48.868872 kernel: Zone ranges: Feb 13 15:24:48.868879 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:24:48.868885 kernel: DMA32 empty Feb 13 15:24:48.868892 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Feb 13 15:24:48.868898 kernel: Movable zone start for each node Feb 13 15:24:48.874016 kernel: Early memory node ranges Feb 13 15:24:48.874049 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Feb 13 15:24:48.874056 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Feb 13 15:24:48.874063 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Feb 13 15:24:48.874069 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Feb 13 15:24:48.874075 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Feb 13 15:24:48.874082 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Feb 13 15:24:48.874088 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Feb 13 15:24:48.874103 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Feb 13 15:24:48.874109 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Feb 13 15:24:48.874116 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:24:48.874126 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Feb 13 15:24:48.874132 kernel: psci: probing for conduit method from ACPI. Feb 13 15:24:48.874139 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:24:48.874147 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:24:48.874154 kernel: psci: Trusted OS migration not required Feb 13 15:24:48.874160 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:24:48.874167 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:24:48.874174 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:24:48.874181 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:24:48.874188 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:24:48.874194 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:24:48.874201 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:24:48.874208 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:24:48.874216 kernel: CPU features: detected: Spectre-v4 Feb 13 15:24:48.874223 kernel: CPU features: detected: Spectre-BHB Feb 13 15:24:48.874229 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:24:48.874236 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:24:48.874243 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:24:48.874249 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:24:48.874256 kernel: alternatives: applying boot alternatives Feb 13 15:24:48.874264 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef Feb 13 15:24:48.874271 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:24:48.874278 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:24:48.874285 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:24:48.874293 kernel: Fallback order for Node 0: 0 Feb 13 15:24:48.874300 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Feb 13 15:24:48.874306 kernel: Policy zone: Normal Feb 13 15:24:48.874313 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:24:48.874319 kernel: software IO TLB: area num 2. Feb 13 15:24:48.874326 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Feb 13 15:24:48.874333 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved) Feb 13 15:24:48.874340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:24:48.874346 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:24:48.874354 kernel: rcu: RCU event tracing is enabled. Feb 13 15:24:48.874362 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:24:48.874370 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:24:48.874380 kernel: Tracing variant of Tasks RCU enabled. 
Feb 13 15:24:48.874386 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:24:48.874394 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:24:48.874402 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:24:48.874408 kernel: GICv3: 256 SPIs implemented Feb 13 15:24:48.874415 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:24:48.874423 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:24:48.874431 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:24:48.874438 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:24:48.874445 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:24:48.874452 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:24:48.874461 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:24:48.874467 kernel: GICv3: using LPI property table @0x00000001000e0000 Feb 13 15:24:48.874474 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Feb 13 15:24:48.874481 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:24:48.874487 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:24:48.874494 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:24:48.874501 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:24:48.874507 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:24:48.874514 kernel: Console: colour dummy device 80x25 Feb 13 15:24:48.874521 kernel: ACPI: Core revision 20230628 Feb 13 15:24:48.874528 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:24:48.874536 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:24:48.874543 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:24:48.874551 kernel: landlock: Up and running. Feb 13 15:24:48.874557 kernel: SELinux: Initializing. Feb 13 15:24:48.874564 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:24:48.874571 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:24:48.874578 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:24:48.874585 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:24:48.874592 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:24:48.874600 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:24:48.874607 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:24:48.874614 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:24:48.874621 kernel: Remapping and enabling EFI services. Feb 13 15:24:48.874628 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:24:48.874635 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:24:48.874642 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:24:48.874649 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Feb 13 15:24:48.874656 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:24:48.874664 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:24:48.874671 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:24:48.874683 kernel: SMP: Total of 2 processors activated. Feb 13 15:24:48.874691 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:24:48.874699 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:24:48.874707 kernel: CPU features: detected: Common not Private translations Feb 13 15:24:48.874714 kernel: CPU features: detected: CRC32 instructions Feb 13 15:24:48.874721 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:24:48.874728 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:24:48.874737 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:24:48.874744 kernel: CPU features: detected: Privileged Access Never Feb 13 15:24:48.874751 kernel: CPU features: detected: RAS Extension Support Feb 13 15:24:48.874758 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:24:48.874765 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:24:48.874773 kernel: alternatives: applying system-wide alternatives Feb 13 15:24:48.874780 kernel: devtmpfs: initialized Feb 13 15:24:48.874787 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:24:48.874796 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:24:48.874803 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:24:48.874810 kernel: SMBIOS 3.0.0 present. Feb 13 15:24:48.874817 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Feb 13 15:24:48.874827 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:24:48.874835 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:24:48.874845 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:24:48.874853 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:24:48.874860 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:24:48.874869 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Feb 13 15:24:48.874876 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:24:48.874883 kernel: cpuidle: using governor menu Feb 13 15:24:48.874892 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 15:24:48.874900 kernel: ASID allocator initialised with 32768 entries Feb 13 15:24:48.875008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:24:48.875020 kernel: Serial: AMBA PL011 UART driver Feb 13 15:24:48.875028 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:24:48.875037 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:24:48.875047 kernel: Modules: 509280 pages in range for PLT usage Feb 13 15:24:48.875054 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:24:48.875062 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:24:48.875069 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:24:48.875076 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:24:48.875083 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:24:48.875091 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:24:48.875098 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:24:48.875105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:24:48.875113 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:24:48.875121 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:24:48.875131 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:24:48.875140 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:24:48.875147 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:24:48.875154 kernel: ACPI: Interpreter enabled Feb 13 15:24:48.875161 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:24:48.875168 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:24:48.875176 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:24:48.875184 kernel: printk: console [ttyAMA0] enabled Feb 13 15:24:48.875192 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:24:48.875361 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:24:48.875440 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:24:48.875506 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:24:48.875569 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:24:48.875633 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:24:48.875644 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:24:48.875651 kernel: PCI host bridge to bus 0000:00 Feb 13 15:24:48.875722 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:24:48.875792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:24:48.875849 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:24:48.877978 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:24:48.878128 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:24:48.878223 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Feb 13 15:24:48.878292 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Feb 13 15:24:48.878359 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:24:48.878432 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.878498 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Feb 13 15:24:48.878571 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.878641 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Feb 13 15:24:48.878711 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.878776 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Feb 13 15:24:48.878847 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.878928 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Feb 13 15:24:48.879023 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.879094 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Feb 13 15:24:48.879171 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.879237 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Feb 13 15:24:48.879308 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.879373 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Feb 13 15:24:48.879450 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.879518 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Feb 13 15:24:48.879591 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:24:48.879657 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Feb 13 15:24:48.879728 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Feb 13 15:24:48.879793 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Feb 13 15:24:48.879868 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:24:48.884596 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Feb 13 15:24:48.884708 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:24:48.884781 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:24:48.884868 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 15:24:48.885079 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Feb 13 15:24:48.885169 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Feb 13 15:24:48.885240 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Feb 13 15:24:48.885314 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Feb 13 15:24:48.885390 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Feb 13 15:24:48.885460 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Feb 13 15:24:48.885536 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 15:24:48.885607 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Feb 13 15:24:48.885676 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Feb 13 15:24:48.885758 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Feb 13 15:24:48.885830 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Feb 13 15:24:48.885900 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:24:48.886017 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:24:48.886091 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Feb 13 15:24:48.886163 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Feb 13 15:24:48.886232 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:24:48.886309 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 13 15:24:48.886380 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:24:48.886447 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:24:48.886519 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 13 15:24:48.886584 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 13 15:24:48.886649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Feb 13 15:24:48.886718 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 15:24:48.886786 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:24:48.886851 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:24:48.888999 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 15:24:48.889103 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Feb 13 15:24:48.889171 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 13 15:24:48.889241 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 15:24:48.889306 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:24:48.889372 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:24:48.889456 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 15:24:48.889529 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:24:48.889606 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:24:48.889687 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 15:24:48.889763 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:24:48.889840 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:24:48.889948 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 15:24:48.890102 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:24:48.890171 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:24:48.890242 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 15:24:48.890308 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:24:48.890373 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:24:48.890442 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Feb 13 15:24:48.890508 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:24:48.890580 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 13 15:24:48.890646 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:24:48.890714 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 13 15:24:48.890780 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:24:48.890845 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Feb 13 15:24:48.892755 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:24:48.892872 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Feb 13 15:24:48.893129 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:24:48.893208 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Feb 13 15:24:48.893275 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:24:48.893343 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Feb 13 15:24:48.893408 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:24:48.893474 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Feb 13 15:24:48.893540 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:24:48.893612 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Feb 13 15:24:48.893677 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:24:48.893747 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Feb 13 15:24:48.893811 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Feb 13 15:24:48.893877 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Feb 13 15:24:48.893981 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 15:24:48.894059 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Feb 13 15:24:48.894131 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 15:24:48.894201 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Feb 13 15:24:48.894267 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 15:24:48.894332 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Feb 13 15:24:48.894395 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 15:24:48.894459 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Feb 13 15:24:48.894523 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 15:24:48.894587 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Feb 13 15:24:48.894653 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 15:24:48.894718 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Feb 13 15:24:48.894781 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 15:24:48.894845 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Feb 13 15:24:48.896946 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 15:24:48.897068 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Feb 13 15:24:48.897140 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Feb 13 15:24:48.897210 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Feb 13 15:24:48.897284 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Feb 13 15:24:48.897360 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:24:48.897428 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Feb 13 15:24:48.897494 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Feb 13 15:24:48.897559 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 15:24:48.897623 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Feb 13 15:24:48.897687 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:24:48.897758 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Feb 13 15:24:48.897828 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Feb 13 15:24:48.897893 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 15:24:48.900029 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Feb 13 15:24:48.900109 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:24:48.900184 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:24:48.900260 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Feb 13 15:24:48.900330 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Feb 13 15:24:48.900395 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 15:24:48.900459 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Feb 13 15:24:48.900521 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:24:48.900593 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:24:48.900660 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Feb 13 15:24:48.903030 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 15:24:48.903132 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Feb 13 15:24:48.903202 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:24:48.903278 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Feb 13 15:24:48.903347 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Feb 13 15:24:48.903415 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Feb 13 15:24:48.903480 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 15:24:48.903544 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Feb 13 15:24:48.903620 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:24:48.903697 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Feb 13 15:24:48.903764 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Feb 13 15:24:48.903833 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Feb 13 15:24:48.903898 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 15:24:48.904043 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Feb 13 15:24:48.904114 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:24:48.904188 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Feb 13 15:24:48.904262 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Feb 13 15:24:48.904331 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Feb 13 15:24:48.904399 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Feb 13 15:24:48.904464 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 15:24:48.904529 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Feb 13 15:24:48.904594 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:24:48.904665 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Feb 13 15:24:48.904730 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 15:24:48.904799 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Feb 13 15:24:48.904863 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:24:48.905046 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Feb 13 15:24:48.905123 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Feb 13 15:24:48.905189 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Feb 13 15:24:48.905256 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:24:48.905326 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:24:48.905387 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:24:48.905452 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:24:48.905525 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 15:24:48.906429 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Feb 13 15:24:48.906553 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:24:48.906626 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Feb 13 15:24:48.906687 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Feb 13 15:24:48.906752 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:24:48.906831 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Feb 13 15:24:48.906894 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Feb 13 15:24:48.907465 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:24:48.907549 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 15:24:48.907612 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Feb 13 15:24:48.907672 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:24:48.907750 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Feb 13 15:24:48.907819 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Feb 13 15:24:48.907882 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:24:48.909095 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Feb 13 15:24:48.909183 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Feb 13 15:24:48.909245 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:24:48.909312 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Feb 13 15:24:48.909372 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Feb 13 15:24:48.909431 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:24:48.909504 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Feb 13 15:24:48.909564 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Feb 13 15:24:48.909629 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:24:48.909696 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Feb 13 15:24:48.909756 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Feb 13 15:24:48.909815 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:24:48.909825 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:24:48.909832 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:24:48.909840 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:24:48.909848 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:24:48.909858 kernel: iommu: Default domain type: Translated Feb 13 15:24:48.909865 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:24:48.909873 kernel: efivars: Registered efivars operations Feb 13 15:24:48.909880 kernel: vgaarb: loaded Feb 13 15:24:48.909888 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:24:48.909895 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:24:48.909917 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:24:48.911026 kernel: pnp: PnP ACPI init Feb 13 15:24:48.911196 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:24:48.911218 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:24:48.911226 kernel: NET: Registered PF_INET protocol family Feb 13 15:24:48.911234 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:24:48.911244 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:24:48.911252 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:24:48.911260 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:24:48.911268 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:24:48.911276 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:24:48.911287 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:24:48.911297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:24:48.911305 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:24:48.911391 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Feb 13 15:24:48.911403 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:24:48.911411 kernel: kvm [1]: HYP mode not available Feb 13 15:24:48.911418 kernel: Initialise system trusted keyrings Feb 13 15:24:48.911427 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:24:48.911434 kernel: Key type asymmetric registered Feb 13 15:24:48.911443 kernel: Asymmetric key parser 'x509' registered Feb 13 15:24:48.911451 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:24:48.911459 kernel: io scheduler mq-deadline registered Feb 13 15:24:48.911466 kernel: io scheduler kyber registered Feb 13 15:24:48.911474 kernel: io scheduler bfq registered Feb 13 15:24:48.911482 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:24:48.911553 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Feb 13 15:24:48.911619 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Feb 13 15:24:48.911686 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.911757 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Feb 13 15:24:48.911822 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Feb 13 15:24:48.911886 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.914068 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:24:48.914157 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:24:48.914234 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.914307 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:24:48.914374 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:24:48.914440 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.914508 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:24:48.914575 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:24:48.914646 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.914717 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:24:48.914786 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:24:48.914852 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.915036 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:24:48.915113 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:24:48.915182 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.915254 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:24:48.915322 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:24:48.915386 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.915396 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:24:48.915462 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:24:48.915529 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:24:48.915593 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:24:48.915603 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:24:48.915611 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:24:48.915618 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:24:48.915694 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:24:48.915778 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:24:48.915790 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:24:48.915803 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:24:48.915884 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:24:48.915895 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:24:48.916356 kernel: thunder_xcv, ver 1.0 Feb 13 15:24:48.916381 kernel: thunder_bgx, ver 1.0 Feb 13 15:24:48.916388 kernel: nicpf, ver 1.0 Feb 13 15:24:48.916396 kernel: nicvf, ver 
1.0 Feb 13 15:24:48.916513 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:24:48.916587 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:24:48 UTC (1739460288) Feb 13 15:24:48.916597 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:24:48.916605 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:24:48.916612 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:24:48.916620 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:24:48.916628 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:24:48.916635 kernel: Segment Routing with IPv6 Feb 13 15:24:48.916642 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:24:48.916650 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:24:48.916660 kernel: Key type dns_resolver registered Feb 13 15:24:48.916667 kernel: registered taskstats version 1 Feb 13 15:24:48.916675 kernel: Loading compiled-in X.509 certificates Feb 13 15:24:48.916683 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4' Feb 13 15:24:48.916690 kernel: Key type .fscrypt registered Feb 13 15:24:48.916698 kernel: Key type fscrypt-provisioning registered Feb 13 15:24:48.916705 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:24:48.916713 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:24:48.916720 kernel: ima: No architecture policies found Feb 13 15:24:48.916729 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:24:48.916737 kernel: clk: Disabling unused clocks Feb 13 15:24:48.916744 kernel: Freeing unused kernel memory: 38336K Feb 13 15:24:48.916751 kernel: Run /init as init process Feb 13 15:24:48.916759 kernel: with arguments: Feb 13 15:24:48.916767 kernel: /init Feb 13 15:24:48.916774 kernel: with environment: Feb 13 15:24:48.916783 kernel: HOME=/ Feb 13 15:24:48.916790 kernel: TERM=linux Feb 13 15:24:48.916799 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:24:48.916807 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:24:48.916818 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:24:48.916827 systemd[1]: Detected virtualization kvm. Feb 13 15:24:48.916835 systemd[1]: Detected architecture arm64. Feb 13 15:24:48.916842 systemd[1]: Running in initrd. Feb 13 15:24:48.916850 systemd[1]: No hostname configured, using default hostname. Feb 13 15:24:48.916860 systemd[1]: Hostname set to . Feb 13 15:24:48.916867 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:24:48.916875 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:24:48.916883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:48.916892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:48.916900 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:24:48.916926 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 15:24:48.916937 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:24:48.916950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:24:48.916972 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:24:48.916980 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:24:48.916989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:48.916997 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:48.917006 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:24:48.917014 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:24:48.917025 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:24:48.917033 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:24:48.917041 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:24:48.917049 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:24:48.917057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:24:48.917066 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:24:48.917074 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:48.917082 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:48.917090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:48.917099 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:24:48.917107 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:24:48.917115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:24:48.917123 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:24:48.917131 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:24:48.917139 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:24:48.917147 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:24:48.917155 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:48.917165 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:24:48.917173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:48.917210 systemd-journald[238]: Collecting audit messages is disabled. Feb 13 15:24:48.917233 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:24:48.917242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:24:48.917250 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:48.917259 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:48.917267 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:48.917275 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:48.917285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Feb 13 15:24:48.917293 kernel: Bridge firewalling registered Feb 13 15:24:48.917300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:48.917309 systemd-journald[238]: Journal started Feb 13 15:24:48.917329 systemd-journald[238]: Runtime Journal (/run/log/journal/c2551dacca5b411e9ebb23e8cc4a714e) is 8M, max 76.6M, 68.6M free. Feb 13 15:24:48.891523 systemd-modules-load[239]: Inserted module 'overlay' Feb 13 15:24:48.920052 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:24:48.915209 systemd-modules-load[239]: Inserted module 'br_netfilter' Feb 13 15:24:48.920729 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:48.933318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:24:48.940127 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:24:48.941918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:48.943852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:48.948037 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:24:48.961524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:48.966233 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:24:48.979488 dracut-cmdline[272]: dracut-dracut-053 Feb 13 15:24:48.985364 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef Feb 13 15:24:49.006777 systemd-resolved[275]: Positive Trust Anchors: Feb 13 15:24:49.006798 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:24:49.006829 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:24:49.012431 systemd-resolved[275]: Defaulting to hostname 'linux'. Feb 13 15:24:49.014024 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:24:49.016277 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:49.091936 kernel: SCSI subsystem initialized Feb 13 15:24:49.096989 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:24:49.104942 kernel: iscsi: registered transport (tcp) Feb 13 15:24:49.117973 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:24:49.118062 kernel: QLogic iSCSI HBA Driver Feb 13 15:24:49.167897 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 15:24:49.175110 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:24:49.194023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:24:49.194088 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:24:49.194100 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:24:49.244977 kernel: raid6: neonx8 gen() 15669 MB/s Feb 13 15:24:49.262004 kernel: raid6: neonx4 gen() 15728 MB/s Feb 13 15:24:49.278969 kernel: raid6: neonx2 gen() 13168 MB/s Feb 13 15:24:49.296026 kernel: raid6: neonx1 gen() 10447 MB/s Feb 13 15:24:49.312965 kernel: raid6: int64x8 gen() 6755 MB/s Feb 13 15:24:49.329990 kernel: raid6: int64x4 gen() 7312 MB/s Feb 13 15:24:49.347017 kernel: raid6: int64x2 gen() 6080 MB/s Feb 13 15:24:49.364081 kernel: raid6: int64x1 gen() 5030 MB/s Feb 13 15:24:49.364159 kernel: raid6: using algorithm neonx4 gen() 15728 MB/s Feb 13 15:24:49.380989 kernel: raid6: .... xor() 12358 MB/s, rmw enabled Feb 13 15:24:49.381069 kernel: raid6: using neon recovery algorithm Feb 13 15:24:49.386124 kernel: xor: measuring software checksum speed Feb 13 15:24:49.386209 kernel: 8regs : 18722 MB/sec Feb 13 15:24:49.386230 kernel: 32regs : 21710 MB/sec Feb 13 15:24:49.386248 kernel: arm64_neon : 27851 MB/sec Feb 13 15:24:49.386265 kernel: xor: using function: arm64_neon (27851 MB/sec) Feb 13 15:24:49.436002 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:24:49.451936 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:24:49.458129 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:49.473544 systemd-udevd[457]: Using default interface naming scheme 'v255'. Feb 13 15:24:49.478640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:49.485293 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:24:49.502950 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Feb 13 15:24:49.540535 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:24:49.546143 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:24:49.597935 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:49.605227 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:24:49.627010 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:24:49.631725 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:24:49.634918 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:49.636563 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:24:49.643226 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:24:49.658935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 15:24:49.708999 kernel: ACPI: bus type USB registered Feb 13 15:24:49.709069 kernel: usbcore: registered new interface driver usbfs Feb 13 15:24:49.709090 kernel: usbcore: registered new interface driver hub Feb 13 15:24:49.709100 kernel: usbcore: registered new device driver usb Feb 13 15:24:49.720443 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:24:49.733514 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:24:49.733587 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:24:49.770490 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:24:49.770647 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:24:49.770768 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:24:49.770853 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:24:49.770950 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:24:49.771056 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:24:49.771138 kernel: hub 1-0:1.0: USB hub found Feb 13 15:24:49.771233 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:24:49.771312 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:24:49.771410 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:24:49.772095 kernel: hub 2-0:1.0: USB hub found Feb 13 15:24:49.772211 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:24:49.772296 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:24:49.772307 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:24:49.772382 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:24:49.749550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:24:49.749668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:49.751059 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:49.752019 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:49.752194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:49.753039 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:49.758327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:49.780929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:49.790278 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:24:49.801795 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:24:49.801935 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:24:49.802074 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:24:49.802162 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:24:49.802242 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:24:49.802252 kernel: GPT:17805311 != 80003071 Feb 13 15:24:49.802261 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:24:49.802271 kernel: GPT:17805311 != 80003071 Feb 13 15:24:49.802279 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 15:24:49.802288 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:24:49.802301 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:24:49.789203 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:49.808483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:49.840940 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (516) Feb 13 15:24:49.846965 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (527) Feb 13 15:24:49.868670 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:24:49.869362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:24:49.878597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:24:49.887221 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:24:49.895838 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:24:49.913224 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:24:49.923580 disk-uuid[579]: Primary Header is updated. Feb 13 15:24:49.923580 disk-uuid[579]: Secondary Entries is updated. Feb 13 15:24:49.923580 disk-uuid[579]: Secondary Header is updated. Feb 13 15:24:49.930970 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:24:50.005273 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:24:50.247013 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:24:50.382565 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:24:50.382628 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:24:50.384934 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:24:50.438085 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:24:50.438298 kernel: usbcore: registered new interface driver usbhid Feb 13 15:24:50.438311 kernel: usbhid: USB HID core driver Feb 13 15:24:50.943040 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:24:50.944257 disk-uuid[580]: The operation has completed successfully. Feb 13 15:24:51.007141 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:24:51.007250 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:24:51.048233 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:24:51.054746 sh[595]: Success Feb 13 15:24:51.068966 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:24:51.129673 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:24:51.145196 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:24:51.145994 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:24:51.164217 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855 Feb 13 15:24:51.164290 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:24:51.164312 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:24:51.165166 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:24:51.165194 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:24:51.170955 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:24:51.173170 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:24:51.174678 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:24:51.182146 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:24:51.188418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:24:51.208074 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:24:51.208147 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:24:51.208160 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:24:51.212052 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:24:51.212113 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:24:51.221009 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:24:51.220850 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:24:51.227392 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:24:51.235167 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:24:51.302932 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:24:51.317633 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:24:51.337304 ignition[701]: Ignition 2.20.0 Feb 13 15:24:51.337315 ignition[701]: Stage: fetch-offline Feb 13 15:24:51.337358 ignition[701]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:51.337367 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:51.338097 ignition[701]: parsed url from cmdline: "" Feb 13 15:24:51.338102 ignition[701]: no config URL provided Feb 13 15:24:51.338109 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:24:51.341201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:24:51.338120 ignition[701]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:24:51.338126 ignition[701]: failed to fetch config: resource requires networking Feb 13 15:24:51.343458 systemd-networkd[782]: lo: Link UP Feb 13 15:24:51.338320 ignition[701]: Ignition finished successfully Feb 13 15:24:51.343463 systemd-networkd[782]: lo: Gained carrier Feb 13 15:24:51.345727 systemd-networkd[782]: Enumeration completed Feb 13 15:24:51.345839 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:24:51.346784 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:24:51.346788 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:51.347742 systemd[1]: Reached target network.target - Network. Feb 13 15:24:51.347968 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:51.347973 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:51.349260 systemd-networkd[782]: eth0: Link UP Feb 13 15:24:51.349264 systemd-networkd[782]: eth0: Gained carrier Feb 13 15:24:51.349272 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:51.356203 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:24:51.357422 systemd-networkd[782]: eth1: Link UP Feb 13 15:24:51.357426 systemd-networkd[782]: eth1: Gained carrier Feb 13 15:24:51.357435 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:51.372812 ignition[787]: Ignition 2.20.0 Feb 13 15:24:51.372822 ignition[787]: Stage: fetch Feb 13 15:24:51.373234 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:51.373246 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:51.373363 ignition[787]: parsed url from cmdline: "" Feb 13 15:24:51.373367 ignition[787]: no config URL provided Feb 13 15:24:51.373371 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:24:51.373379 ignition[787]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:24:51.373470 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 15:24:51.374347 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 15:24:51.388015 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:24:51.411010 systemd-networkd[782]: eth0: DHCPv4 address 78.47.85.163/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:24:51.575315 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 15:24:51.583073 ignition[787]: GET result: OK Feb 13 15:24:51.584203 ignition[787]: parsing config with SHA512: bd0f9d0cd9f96a1466d9b0c015fe3e5e307de75d7e88234c625919bffd7b3194afd782bb0ad193c6db9634ce5b8a27a3cb869022f92131422f9f76500db5b6b1 Feb 13 15:24:51.592008 unknown[787]: fetched base config from "system" Feb 13 15:24:51.592021 unknown[787]: fetched base config from "system" Feb 13 15:24:51.592538 ignition[787]: fetch: fetch complete Feb 13 15:24:51.592028 unknown[787]: fetched user config from "hetzner" Feb 13 15:24:51.592550 ignition[787]: fetch: fetch passed Feb 13 15:24:51.595080 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:24:51.592603 ignition[787]: Ignition finished successfully Feb 13 15:24:51.599161 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:24:51.614840 ignition[794]: Ignition 2.20.0 Feb 13 15:24:51.614850 ignition[794]: Stage: kargs Feb 13 15:24:51.615107 ignition[794]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:51.615117 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:51.616031 ignition[794]: kargs: kargs passed Feb 13 15:24:51.616085 ignition[794]: Ignition finished successfully Feb 13 15:24:51.617922 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:24:51.631118 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:24:51.643303 ignition[801]: Ignition 2.20.0 Feb 13 15:24:51.643314 ignition[801]: Stage: disks Feb 13 15:24:51.643486 ignition[801]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:51.643495 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:51.644479 ignition[801]: disks: disks passed Feb 13 15:24:51.644529 ignition[801]: Ignition finished successfully Feb 13 15:24:51.646433 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:24:51.647398 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:24:51.648564 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:24:51.649187 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:24:51.650332 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:24:51.651450 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:24:51.658279 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:24:51.673819 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 15:24:51.678846 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:24:52.178175 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:24:52.228098 kernel: EXT4-fs (sda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none. Feb 13 15:24:52.229285 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:24:52.231045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:24:52.238096 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:52.242081 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:24:52.246224 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:24:52.247837 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:24:52.249416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:24:52.256874 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (818) Feb 13 15:24:52.256962 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:24:52.256977 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:24:52.256987 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:24:52.258390 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:24:52.266620 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:24:52.272243 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:24:52.272322 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:24:52.277261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:24:52.325000 coreos-metadata[820]: Feb 13 15:24:52.324 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 15:24:52.328006 coreos-metadata[820]: Feb 13 15:24:52.327 INFO Fetch successful Feb 13 15:24:52.331304 coreos-metadata[820]: Feb 13 15:24:52.330 INFO wrote hostname ci-4230-0-1-9-12db063e25 to /sysroot/etc/hostname Feb 13 15:24:52.332883 initrd-setup-root[845]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:24:52.334399 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:24:52.341250 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:24:52.346634 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:24:52.351118 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:24:52.386380 systemd-networkd[782]: eth1: Gained IPv6LL Feb 13 15:24:52.454141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:24:52.460077 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:24:52.463140 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:24:52.471950 kernel: BTRFS info (device sda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:24:52.496024 ignition[935]: INFO : Ignition 2.20.0 Feb 13 15:24:52.497032 ignition[935]: INFO : Stage: mount Feb 13 15:24:52.497139 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:24:52.499209 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:52.499209 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:52.501271 ignition[935]: INFO : mount: mount passed Feb 13 15:24:52.501271 ignition[935]: INFO : Ignition finished successfully Feb 13 15:24:52.501199 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:24:52.507100 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:24:53.165432 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:24:53.173400 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:53.186321 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (947) Feb 13 15:24:53.188158 kernel: BTRFS info (device sda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011 Feb 13 15:24:53.188203 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:24:53.188215 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:24:53.191969 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:24:53.192035 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:24:53.194613 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:24:53.222081 ignition[964]: INFO : Ignition 2.20.0 Feb 13 15:24:53.222081 ignition[964]: INFO : Stage: files Feb 13 15:24:53.224282 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:53.224282 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:53.224282 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:24:53.227316 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:24:53.227316 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:24:53.229263 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:24:53.230458 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:24:53.230458 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:24:53.229636 unknown[964]: wrote ssh authorized keys file for user: core Feb 13 15:24:53.232550 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:24:53.232550 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:24:53.282062 systemd-networkd[782]: eth0: Gained IPv6LL Feb 13 15:24:53.314646 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:24:53.525977 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:24:53.525977 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:24:53.525977 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:24:53.525977 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:24:53.530955 ignition[964]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:24:53.530955 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Feb 13 15:24:54.080327 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:24:54.401428 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Feb 13 15:24:54.401428 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:24:54.406130 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:24:54.406130 ignition[964]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:24:54.406130 ignition[964]: INFO : files: files passed Feb 13 15:24:54.406130 ignition[964]: INFO : Ignition finished successfully Feb 13 15:24:54.404820 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:24:54.418209 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:24:54.421180 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:24:54.423512 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:24:54.424239 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:24:54.436944 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:54.436944 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:54.439336 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:54.441448 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:24:54.442721 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:24:54.447152 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:24:54.474428 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:24:54.475678 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:24:54.477551 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:24:54.479195 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:24:54.481367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:24:54.489322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:24:54.505916 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:24:54.512119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:24:54.523746 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:54.525663 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:54.526680 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:24:54.527804 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:24:54.527960 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:24:54.529861 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:24:54.530545 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:24:54.531886 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:24:54.533138 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:24:54.534137 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:24:54.535218 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:24:54.536328 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:24:54.537467 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:24:54.538443 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:24:54.539486 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:24:54.540396 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:24:54.540530 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:24:54.541684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:54.542348 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:54.543392 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:24:54.543793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 15:24:54.544552 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:24:54.544680 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:24:54.546092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:24:54.546214 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:24:54.547413 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:24:54.547508 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:24:54.548513 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:24:54.548613 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:24:54.563348 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:24:54.569352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:24:54.571087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:24:54.571816 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:54.575801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:24:54.576126 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:24:54.581063 ignition[1016]: INFO : Ignition 2.20.0 Feb 13 15:24:54.581063 ignition[1016]: INFO : Stage: umount Feb 13 15:24:54.581063 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:54.581063 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:24:54.595662 ignition[1016]: INFO : umount: umount passed Feb 13 15:24:54.595662 ignition[1016]: INFO : Ignition finished successfully Feb 13 15:24:54.589368 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:24:54.590116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:24:54.593737 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:24:54.593844 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:24:54.594954 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:24:54.596218 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:24:54.598308 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:24:54.598374 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:24:54.600285 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:24:54.600342 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:24:54.601455 systemd[1]: Stopped target network.target - Network. Feb 13 15:24:54.602311 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:24:54.602387 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:24:54.603266 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:24:54.605390 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:24:54.605733 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:54.612969 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:24:54.614033 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:24:54.614518 systemd[1]: iscsid.socket: Deactivated successfully. 
Feb 13 15:24:54.614566 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:24:54.616089 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:24:54.616125 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:24:54.617242 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:24:54.617299 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:24:54.618255 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:24:54.618299 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:24:54.620348 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:24:54.621013 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:24:54.632135 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:24:54.640649 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:24:54.640761 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:24:54.644186 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 15:24:54.644397 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:24:54.645191 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:24:54.650164 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 15:24:54.650452 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:24:54.650562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:24:54.653636 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:24:54.654446 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:54.655725 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:24:54.655793 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:24:54.661093 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:24:54.662208 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:24:54.662327 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:24:54.664020 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:24:54.664173 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:54.665513 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:24:54.665592 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:54.666970 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:24:54.667037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:54.670295 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:54.674815 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 15:24:54.674887 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:24:54.686466 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:24:54.686615 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:24:54.689583 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Feb 13 15:24:54.690087 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:54.691384 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:24:54.691435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:54.692490 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:24:54.692528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:54.694100 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:24:54.694155 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:24:54.695556 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:24:54.695604 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:24:54.697075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:24:54.697126 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:54.706568 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:24:54.707424 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:24:54.707503 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:54.708608 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:24:54.708666 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:54.709668 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:24:54.709722 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:54.711190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:54.711244 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:54.715996 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 15:24:54.716071 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 15:24:54.718379 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:24:54.718525 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:24:54.721314 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:24:54.732169 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:24:54.741771 systemd[1]: Switching root. Feb 13 15:24:54.775365 systemd-journald[238]: Journal stopped Feb 13 15:24:55.699522 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Feb 13 15:24:55.699597 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:24:55.699613 kernel: SELinux: policy capability open_perms=1 Feb 13 15:24:55.699625 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:24:55.699634 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:24:55.699642 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:24:55.699652 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:24:55.699665 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:24:55.699674 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:24:55.699683 systemd[1]: Successfully loaded SELinux policy in 33.219ms. 
Feb 13 15:24:55.699707 kernel: audit: type=1403 audit(1739460294.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:24:55.699719 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.598ms. Feb 13 15:24:55.699735 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:24:55.699745 systemd[1]: Detected virtualization kvm. Feb 13 15:24:55.699759 systemd[1]: Detected architecture arm64. Feb 13 15:24:55.699769 systemd[1]: Detected first boot. Feb 13 15:24:55.699779 systemd[1]: Hostname set to <ci-4230-0-1-9-12db063e25>. Feb 13 15:24:55.699789 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:24:55.699799 zram_generator::config[1061]: No configuration found. Feb 13 15:24:55.699813 kernel: NET: Registered PF_VSOCK protocol family Feb 13 15:24:55.699822 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:24:55.699833 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 15:24:55.699843 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:24:55.699853 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:24:55.699863 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:24:55.699873 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:24:55.699883 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:24:55.699894 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:24:55.699952 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:24:55.699966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:24:55.699977 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:24:55.699987 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:24:55.699997 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:24:55.700007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:55.700017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:55.700028 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:24:55.700040 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:24:55.700051 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:24:55.700061 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:24:55.700071 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:24:55.700081 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:55.700091 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:24:55.700103 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:24:55.700113 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:24:55.700123 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:24:55.700133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:55.700143 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:24:55.700153 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:24:55.700163 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:24:55.700173 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:24:55.700183 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:24:55.700193 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 15:24:55.700205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:55.700216 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:55.700230 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:55.700241 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:24:55.700251 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:24:55.700261 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:24:55.700272 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:24:55.700282 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:24:55.700292 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:24:55.700302 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:24:55.700312 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:24:55.700322 systemd[1]: Reached target machines.target - Containers. Feb 13 15:24:55.700332 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:24:55.700342 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:55.700353 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:24:55.700363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:24:55.700374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:55.700383 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:24:55.700393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:55.700403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:24:55.700413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:55.700426 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:24:55.700437 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:24:55.700447 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:24:55.700457 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:24:55.700467 systemd[1]: Stopped systemd-fsck-usr.service. 
Feb 13 15:24:55.700478 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:24:55.700488 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:24:55.700498 kernel: fuse: init (API version 7.39) Feb 13 15:24:55.700507 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:24:55.700517 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:24:55.700529 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:24:55.700539 kernel: ACPI: bus type drm_connector registered Feb 13 15:24:55.700549 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 15:24:55.700559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:24:55.700570 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:24:55.700581 systemd[1]: Stopped verity-setup.service. Feb 13 15:24:55.700591 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:24:55.700602 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:24:55.700614 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:24:55.700623 kernel: loop: module loaded Feb 13 15:24:55.700634 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:24:55.700644 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:24:55.700654 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:24:55.700664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:55.700674 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:24:55.700685 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:24:55.700695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:55.700705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:55.700715 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:24:55.700726 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:24:55.700736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:55.700746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:55.700757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:24:55.700767 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:24:55.700777 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:55.700789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:55.700799 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:24:55.700811 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:55.700822 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:24:55.700832 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 15:24:55.700842 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:24:55.700852 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:24:55.700890 systemd-journald[1125]: Collecting audit messages is disabled. Feb 13 15:24:55.702982 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 15:24:55.703017 systemd-journald[1125]: Journal started Feb 13 15:24:55.703044 systemd-journald[1125]: Runtime Journal (/run/log/journal/c2551dacca5b411e9ebb23e8cc4a714e) is 8M, max 76.6M, 68.6M free. Feb 13 15:24:55.411551 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:24:55.424083 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:24:55.424781 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:24:55.713016 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:24:55.718529 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:24:55.718579 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:55.734003 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:24:55.736742 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:55.744121 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:24:55.744188 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:55.749048 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:24:55.753940 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:24:55.759049 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:24:55.762937 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:24:55.764476 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:24:55.766000 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:24:55.767272 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 15:24:55.775245 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:24:55.776327 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:24:55.791945 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 15:24:55.784955 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:24:55.800357 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:24:55.811449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:55.829191 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:55.836894 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:24:55.838621 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Feb 13 15:24:55.849941 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:24:55.852397 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:24:55.857562 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 15:24:55.862544 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:24:55.869175 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Feb 13 15:24:55.875647 systemd-journald[1125]: Time spent on flushing to /var/log/journal/c2551dacca5b411e9ebb23e8cc4a714e is 54.580ms for 1150 entries. Feb 13 15:24:55.875647 systemd-journald[1125]: System Journal (/var/log/journal/c2551dacca5b411e9ebb23e8cc4a714e) is 8M, max 584.8M, 576.8M free. Feb 13 15:24:55.941670 systemd-journald[1125]: Received client request to flush runtime journal. Feb 13 15:24:55.941723 kernel: loop1: detected capacity change from 0 to 8 Feb 13 15:24:55.941741 kernel: loop2: detected capacity change from 0 to 194096 Feb 13 15:24:55.869198 systemd-tmpfiles[1163]: ACLs are not supported, ignoring. Feb 13 15:24:55.881367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:55.890128 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:24:55.909201 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:24:55.947102 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:24:55.955540 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 15:24:55.963957 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:24:55.974168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:55.986145 kernel: loop3: detected capacity change from 0 to 113512 Feb 13 15:24:56.014173 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Feb 13 15:24:56.014189 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Feb 13 15:24:56.024987 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:56.029103 kernel: loop4: detected capacity change from 0 to 123192 Feb 13 15:24:56.046768 kernel: loop5: detected capacity change from 0 to 8 Feb 13 15:24:56.050723 kernel: loop6: detected capacity change from 0 to 194096 Feb 13 15:24:56.072977 kernel: loop7: detected capacity change from 0 to 113512 Feb 13 15:24:56.086997 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:24:56.089764 (sd-merge)[1210]: Merged extensions into '/usr'. Feb 13 15:24:56.096036 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:24:56.096056 systemd[1]: Reloading... Feb 13 15:24:56.193247 zram_generator::config[1234]: No configuration found. Feb 13 15:24:56.369931 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:24:56.393551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 15:24:56.455513 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:24:56.455734 systemd[1]: Reloading finished in 358 ms. Feb 13 15:24:56.478632 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:24:56.480960 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:24:56.494308 systemd[1]: Starting ensure-sysext.service... Feb 13 15:24:56.506641 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:24:56.519094 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:24:56.519113 systemd[1]: Reloading... Feb 13 15:24:56.530410 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:24:56.531025 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:24:56.531650 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:24:56.531850 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Feb 13 15:24:56.531896 systemd-tmpfiles[1276]: ACLs are not supported, ignoring. Feb 13 15:24:56.536194 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:24:56.536331 systemd-tmpfiles[1276]: Skipping /boot Feb 13 15:24:56.545719 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:24:56.545865 systemd-tmpfiles[1276]: Skipping /boot Feb 13 15:24:56.596946 zram_generator::config[1305]: No configuration found. Feb 13 15:24:56.701104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:24:56.763397 systemd[1]: Reloading finished in 243 ms. Feb 13 15:24:56.777971 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:24:56.791239 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:56.800210 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:24:56.805023 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:24:56.809997 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:24:56.814951 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:24:56.817196 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:56.823362 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:24:56.827735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:56.831236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:56.837505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:56.841392 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:56.844124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:24:56.844272 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:24:56.862409 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:24:56.865032 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:24:56.867657 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:56.867836 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:56.880477 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:24:56.883663 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:56.889254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:56.891090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:56.891238 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:24:56.891815 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:56.893952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:56.899813 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:24:56.905015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:56.905237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:56.912020 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:56.912271 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:24:56.917193 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:24:56.921570 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:24:56.929663 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:56.930420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:56.939114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:56.947172 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:56.952164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:24:56.956440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:56.957228 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:56.957272 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Feb 13 15:24:56.957326 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:24:56.959976 systemd[1]: Finished ensure-sysext.service. Feb 13 15:24:56.962327 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Feb 13 15:24:56.964706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:56.965633 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:56.967886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:56.969760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:56.971718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:56.971815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:56.980271 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:24:56.988326 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:24:56.991071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:24:56.993266 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:24:57.000490 augenrules[1396]: No rules Feb 13 15:24:57.007577 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:24:57.007870 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:24:57.013034 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:57.023697 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:24:57.142065 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:24:57.150484 systemd-resolved[1348]: Positive Trust Anchors: Feb 13 15:24:57.152739 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:24:57.153035 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:24:57.153073 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:24:57.154633 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:24:57.166703 systemd-resolved[1348]: Using system hostname 'ci-4230-0-1-9-12db063e25'. Feb 13 15:24:57.168348 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:24:57.170121 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Feb 13 15:24:57.187104 systemd-networkd[1409]: lo: Link UP Feb 13 15:24:57.187119 systemd-networkd[1409]: lo: Gained carrier Feb 13 15:24:57.188982 systemd-networkd[1409]: Enumeration completed Feb 13 15:24:57.189102 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:24:57.190552 systemd[1]: Reached target network.target - Network. Feb 13 15:24:57.193133 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:57.193144 systemd-networkd[1409]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:57.197310 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:57.197360 systemd-networkd[1409]: eth1: Link UP Feb 13 15:24:57.197363 systemd-networkd[1409]: eth1: Gained carrier Feb 13 15:24:57.197372 systemd-networkd[1409]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:57.198166 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 15:24:57.202181 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:24:57.226037 systemd-networkd[1409]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:24:57.226679 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:57.229189 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 15:24:57.235022 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:57.235032 systemd-networkd[1409]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:57.236347 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:57.236535 systemd-networkd[1409]: eth0: Link UP Feb 13 15:24:57.236539 systemd-networkd[1409]: eth0: Gained carrier Feb 13 15:24:57.236559 systemd-networkd[1409]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:57.241378 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:57.292987 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1423) Feb 13 15:24:57.295972 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:24:57.297154 systemd-networkd[1409]: eth0: DHCPv4 address 78.47.85.163/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:24:57.297844 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:57.298689 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:57.350713 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:24:57.350827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:57.364379 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
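Both DHCPv4 leases above hand out a single-address prefix (eth1: 10.0.0.3/32 via 10.0.0.1, eth0: 78.47.85.163/32 via 172.31.1.1), so the gateway is never inside the interface's own subnet and presumably has to be reached through an on-link host route set up by the zz-default.network policy mentioned above. A small stdlib-only check of that fact, using only addresses copied from the log:

    import ipaddress

    # eth0 lease exactly as reported by systemd-networkd above.
    lease = ipaddress.ip_interface("78.47.85.163/32")
    gateway = ipaddress.ip_address("172.31.1.1")

    # A /32 network contains only the host address itself, so the gateway
    # can never be on-subnet and needs an explicit on-link route.
    print(lease.network.num_addresses)   # 1
    print(gateway in lease.network)      # False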
Feb 13 15:24:57.368982 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:57.376227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:57.376869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:57.376955 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 15:24:57.376988 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:24:57.377367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:57.377539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:57.382034 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:57.382982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:57.397295 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:24:57.397371 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:24:57.397409 kernel: [drm] features: -context_init Feb 13 15:24:57.397658 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:24:57.399513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:57.399894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:57.405165 kernel: [drm] number of scanouts: 1 Feb 13 15:24:57.407961 kernel: [drm] number of cap sets: 0 Feb 13 15:24:57.409980 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:24:57.410327 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:24:57.412059 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:57.412125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:57.423020 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:24:57.437461 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:24:57.445364 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:24:57.468455 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:57.480526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:57.480828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:57.487215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:57.551308 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:57.594294 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:24:57.604851 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:24:57.621526 lvm[1472]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 13 15:24:57.646228 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:24:57.650339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:57.651238 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:24:57.652128 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:24:57.653065 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:24:57.654151 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:24:57.654962 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:24:57.655619 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:24:57.656334 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:24:57.656368 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:24:57.656845 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:24:57.659175 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:24:57.661703 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:24:57.665259 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 15:24:57.666220 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 15:24:57.666857 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 15:24:57.670039 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:24:57.671752 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 15:24:57.680213 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:24:57.682881 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:24:57.684387 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:24:57.685108 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:24:57.685643 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:24:57.685678 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:24:57.692189 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:24:57.697825 lvm[1476]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:24:57.701127 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:24:57.705152 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:24:57.716324 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:24:57.719492 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:24:57.720612 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:24:57.723240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 15:24:57.727274 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:24:57.731148 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:24:57.738772 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:24:57.742628 jq[1480]: false Feb 13 15:24:57.743168 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:24:57.747455 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:24:57.749899 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:24:57.750512 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:24:57.753141 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:24:57.758955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:24:57.760423 dbus-daemon[1479]: [system] SELinux support is enabled Feb 13 15:24:57.764009 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:24:57.764958 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:24:57.777122 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:24:57.778875 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:24:57.787299 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:24:57.787350 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:24:57.788154 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:24:57.788179 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:24:57.800405 coreos-metadata[1478]: Feb 13 15:24:57.800 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:24:57.806147 coreos-metadata[1478]: Feb 13 15:24:57.805 INFO Fetch successful Feb 13 15:24:57.806147 coreos-metadata[1478]: Feb 13 15:24:57.805 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:24:57.806950 coreos-metadata[1478]: Feb 13 15:24:57.806 INFO Fetch successful Feb 13 15:24:57.816875 jq[1492]: true Feb 13 15:24:57.833349 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:24:57.835569 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:24:57.843449 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:24:57.844935 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
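coreos-metadata above fetches http://169.254.169.254/hetzner/v1/metadata and .../private-networks, both successfully. A rough illustration of the same two requests using the standard library; the endpoint is the link-local metadata address taken from the log, so this is only meaningful from inside a Hetzner instance, and it is not how the agent itself is implemented:

    import urllib.request

    BASE = "http://169.254.169.254/hetzner/v1"   # endpoint taken from the log above

    for path in ("metadata", "metadata/private-networks"):
        with urllib.request.urlopen(f"{BASE}/{path}", timeout=5) as resp:
            print(path, resp.status, len(resp.read()), "bytes")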
Feb 13 15:24:57.852939 extend-filesystems[1481]: Found loop4 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found loop5 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found loop6 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found loop7 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda1 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda2 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda3 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found usr Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda4 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda6 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda7 Feb 13 15:24:57.852939 extend-filesystems[1481]: Found sda9 Feb 13 15:24:57.852939 extend-filesystems[1481]: Checking size of /dev/sda9 Feb 13 15:24:57.916101 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:24:57.916183 tar[1495]: linux-arm64/helm Feb 13 15:24:57.916395 update_engine[1491]: I20250213 15:24:57.855459 1491 main.cc:92] Flatcar Update Engine starting Feb 13 15:24:57.916395 update_engine[1491]: I20250213 15:24:57.870605 1491 update_check_scheduler.cc:74] Next update check in 7m30s Feb 13 15:24:57.858227 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:24:57.916699 extend-filesystems[1481]: Resized partition /dev/sda9 Feb 13 15:24:57.867773 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:24:57.921735 extend-filesystems[1523]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:24:57.926139 jq[1511]: true Feb 13 15:24:57.882268 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:24:57.891938 systemd-logind[1489]: New seat seat0. Feb 13 15:24:57.906240 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:24:57.906257 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:24:57.906499 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:24:57.967138 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:24:57.968274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:24:58.061241 bash[1548]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:24:58.064636 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:24:58.078004 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1420) Feb 13 15:24:58.085940 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:24:58.098591 systemd[1]: Starting sshkeys.service... Feb 13 15:24:58.133660 extend-filesystems[1523]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:24:58.133660 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:24:58.133660 extend-filesystems[1523]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:24:58.141312 extend-filesystems[1481]: Resized filesystem in /dev/sda9 Feb 13 15:24:58.141312 extend-filesystems[1481]: Found sr0 Feb 13 15:24:58.135298 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:24:58.135510 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
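The resize recorded above grows the root filesystem on /dev/sda9 (mounted on /) from 1617920 to 9393147 blocks at a 4k block size, i.e. from roughly 6.2 GiB to roughly 35.8 GiB. The arithmetic below reproduces that from the figures in the log:

    # Block counts and block size exactly as reported by EXT4/resize2fs above.
    old_blocks, new_blocks, block_size = 1_617_920, 9_393_147, 4096

    gib = lambda blocks: blocks * block_size / 2**30
    print(f"{gib(old_blocks):.2f} GiB -> {gib(new_blocks):.2f} GiB")   # 6.17 GiB -> 35.83 GiB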
Feb 13 15:24:58.145269 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 15:24:58.201356 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:24:58.226120 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:24:58.235649 coreos-metadata[1560]: Feb 13 15:24:58.235 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:24:58.239924 coreos-metadata[1560]: Feb 13 15:24:58.238 INFO Fetch successful Feb 13 15:24:58.246527 unknown[1560]: wrote ssh authorized keys file for user: core Feb 13 15:24:58.280474 update-ssh-keys[1564]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:24:58.279331 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:24:58.287022 systemd[1]: Finished sshkeys.service. Feb 13 15:24:58.298631 containerd[1510]: time="2025-02-13T15:24:58.298517280Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:24:58.381299 containerd[1510]: time="2025-02-13T15:24:58.380735320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384342560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384390320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384409800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384590440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384608240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384677680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384689640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384896200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384975880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.384991200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:58.386760 containerd[1510]: time="2025-02-13T15:24:58.385000440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385230960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385464800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385605680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385618400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385690880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:24:58.387112 containerd[1510]: time="2025-02-13T15:24:58.385736120Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:24:58.394254 containerd[1510]: time="2025-02-13T15:24:58.394207280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:24:58.394617 containerd[1510]: time="2025-02-13T15:24:58.394595600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:24:58.395240 containerd[1510]: time="2025-02-13T15:24:58.395205960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:24:58.395345 containerd[1510]: time="2025-02-13T15:24:58.395329920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:24:58.395419 containerd[1510]: time="2025-02-13T15:24:58.395407120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:24:58.396145 containerd[1510]: time="2025-02-13T15:24:58.396118880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:24:58.397100 containerd[1510]: time="2025-02-13T15:24:58.397077240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:24:58.397693 containerd[1510]: time="2025-02-13T15:24:58.397667760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:24:58.398092 containerd[1510]: time="2025-02-13T15:24:58.398070400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:24:58.398182 containerd[1510]: time="2025-02-13T15:24:58.398167120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:24:58.398249 containerd[1510]: time="2025-02-13T15:24:58.398237440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 15:24:58.398307 containerd[1510]: time="2025-02-13T15:24:58.398290320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.398384 containerd[1510]: time="2025-02-13T15:24:58.398370280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.398509 containerd[1510]: time="2025-02-13T15:24:58.398493200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.398795 containerd[1510]: time="2025-02-13T15:24:58.398777720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.398893 containerd[1510]: time="2025-02-13T15:24:58.398879880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.398980 containerd[1510]: time="2025-02-13T15:24:58.398966960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.399099 containerd[1510]: time="2025-02-13T15:24:58.399083240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:24:58.399281 containerd[1510]: time="2025-02-13T15:24:58.399265600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.399447 containerd[1510]: time="2025-02-13T15:24:58.399431480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.399664 containerd[1510]: time="2025-02-13T15:24:58.399507160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.399664 containerd[1510]: time="2025-02-13T15:24:58.399525720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.399664 containerd[1510]: time="2025-02-13T15:24:58.399538480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.399664 containerd[1510]: time="2025-02-13T15:24:58.399551680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400014120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400046880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400062680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400078520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400109320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400132840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 15:24:58.400749 containerd[1510]: time="2025-02-13T15:24:58.400147520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.401011 containerd[1510]: time="2025-02-13T15:24:58.400991240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:24:58.401322 containerd[1510]: time="2025-02-13T15:24:58.401104360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.401322 containerd[1510]: time="2025-02-13T15:24:58.401129680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.401322 containerd[1510]: time="2025-02-13T15:24:58.401152800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:24:58.401464 containerd[1510]: time="2025-02-13T15:24:58.401450400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401758160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401784600Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401798360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401807200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401824080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401834560Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:24:58.403140 containerd[1510]: time="2025-02-13T15:24:58.401845200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:24:58.402108 systemd-networkd[1409]: eth1: Gained IPv6LL Feb 13 15:24:58.402651 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. 
Feb 13 15:24:58.405348 containerd[1510]: time="2025-02-13T15:24:58.404104400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:24:58.405348 containerd[1510]: time="2025-02-13T15:24:58.404166160Z" level=info msg="Connect containerd service" Feb 13 15:24:58.405348 containerd[1510]: time="2025-02-13T15:24:58.404216120Z" level=info msg="using legacy CRI server" Feb 13 15:24:58.405348 containerd[1510]: time="2025-02-13T15:24:58.404224400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:24:58.405348 containerd[1510]: time="2025-02-13T15:24:58.404463680Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:24:58.407366 containerd[1510]: time="2025-02-13T15:24:58.407216520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:24:58.408575 systemd[1]: Finished 
systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:24:58.409897 containerd[1510]: time="2025-02-13T15:24:58.409758320Z" level=info msg="Start subscribing containerd event" Feb 13 15:24:58.413887 containerd[1510]: time="2025-02-13T15:24:58.410294480Z" level=info msg="Start recovering state" Feb 13 15:24:58.413887 containerd[1510]: time="2025-02-13T15:24:58.410393200Z" level=info msg="Start event monitor" Feb 13 15:24:58.413887 containerd[1510]: time="2025-02-13T15:24:58.410407440Z" level=info msg="Start snapshots syncer" Feb 13 15:24:58.413887 containerd[1510]: time="2025-02-13T15:24:58.410417760Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:24:58.413887 containerd[1510]: time="2025-02-13T15:24:58.410425640Z" level=info msg="Start streaming server" Feb 13 15:24:58.412250 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:24:58.415796 containerd[1510]: time="2025-02-13T15:24:58.415344080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:24:58.416250 containerd[1510]: time="2025-02-13T15:24:58.416225560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:24:58.416385 containerd[1510]: time="2025-02-13T15:24:58.416371840Z" level=info msg="containerd successfully booted in 0.123110s" Feb 13 15:24:58.420227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:58.423360 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:24:58.426170 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:24:58.484825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:24:58.704020 tar[1495]: linux-arm64/LICENSE Feb 13 15:24:58.704254 tar[1495]: linux-arm64/README.md Feb 13 15:24:58.729381 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:24:58.786057 systemd-networkd[1409]: eth0: Gained IPv6LL Feb 13 15:24:58.787502 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection. Feb 13 15:24:59.203652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:59.216101 (kubelet)[1590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:24:59.435019 sshd_keygen[1519]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:24:59.462426 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:24:59.473053 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:24:59.481076 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:24:59.481293 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:24:59.490307 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:24:59.503072 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:24:59.514682 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:24:59.523269 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:24:59.524084 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:24:59.524891 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:24:59.529610 systemd[1]: Startup finished in 770ms (kernel) + 6.205s (initrd) + 4.679s (userspace) = 11.654s. 
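The boot summary above splits the 11.654s total into 770ms of kernel time, 6.205s in the initrd and 4.679s in userspace; the three stages do add up to the reported total, as the quick check below confirms:

    # Stage timings copied from the "Startup finished" summary above, in seconds.
    kernel, initrd, userspace = 0.770, 6.205, 4.679
    print(f"{kernel + initrd + userspace:.3f} s")   # 11.654 s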
Feb 13 15:24:59.838056 kubelet[1590]: E0213 15:24:59.837823 1590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:24:59.842225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:24:59.842428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:24:59.843053 systemd[1]: kubelet.service: Consumed 861ms CPU time, 241.9M memory peak. Feb 13 15:25:09.936033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:25:09.943165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:10.049727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:10.054523 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:10.105717 kubelet[1628]: E0213 15:25:10.105609 1628 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:10.108486 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:10.108632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:10.109004 systemd[1]: kubelet.service: Consumed 146ms CPU time, 94.7M memory peak. Feb 13 15:25:20.186871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:25:20.196274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:20.292697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:20.297107 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:20.352442 kubelet[1644]: E0213 15:25:20.352381 1644 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:20.354953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:20.355102 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:20.355591 systemd[1]: kubelet.service: Consumed 143ms CPU time, 97.1M memory peak. Feb 13 15:25:28.999620 systemd-timesyncd[1389]: Contacted time server 217.144.138.234:123 (2.flatcar.pool.ntp.org). Feb 13 15:25:28.999701 systemd-timesyncd[1389]: Initial clock synchronization to Thu 2025-02-13 15:25:28.964049 UTC. Feb 13 15:25:30.437368 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:25:30.444191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:30.559490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
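kubelet above exits immediately because /var/lib/kubelet/config.yaml does not exist yet, presumably because node bootstrap has not run at this point (kubeadm, for instance, writes that file during init/join), and systemd keeps rescheduling the unit: the restart counter reaches 3 here and continues climbing below at roughly ten-second intervals. The sketch below, tailored to the timestamp format of this dump and assuming it has been saved to a hypothetical boot.log, just pulls out the restart timestamps and counters so the interval is easy to see:

    import re

    # Matches entries like "Feb 13 15:25:30.437368 systemd[1]: kubelet.service:
    # Scheduled restart job, restart counter is at 3." as they appear above.
    pattern = re.compile(
        r"(\w{3} \d+ \d+:\d+:\d+)\.\d+ systemd\[1\]: kubelet\.service: "
        r"Scheduled restart job, restart counter is at (\d+)"
    )

    with open("boot.log") as fh:          # hypothetical export of this journal
        for line in fh:
            # findall handles several journal entries sharing one physical line.
            for stamp, counter in pattern.findall(line):
                print(stamp, "restart #" + counter)   # restarts 1-6, ~10 s apart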
Feb 13 15:25:30.570500 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:30.629740 kubelet[1659]: E0213 15:25:30.629642 1659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:30.634979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:30.635309 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:30.636248 systemd[1]: kubelet.service: Consumed 163ms CPU time, 96.6M memory peak. Feb 13 15:25:40.686397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:25:40.697399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:40.795006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:40.799114 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:40.844568 kubelet[1675]: E0213 15:25:40.844499 1675 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:40.848463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:40.848844 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:40.849489 systemd[1]: kubelet.service: Consumed 137ms CPU time, 94.8M memory peak. Feb 13 15:25:42.921014 update_engine[1491]: I20250213 15:25:42.920501 1491 update_attempter.cc:509] Updating boot flags... Feb 13 15:25:42.972983 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1693) Feb 13 15:25:43.040928 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1689) Feb 13 15:25:43.101960 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1689) Feb 13 15:25:44.989812 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:25:45.001333 systemd[1]: Started sshd@0-78.47.85.163:22-139.178.68.195:45672.service - OpenSSH per-connection server daemon (139.178.68.195:45672). Feb 13 15:25:45.998077 sshd[1706]: Accepted publickey for core from 139.178.68.195 port 45672 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:46.001145 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:46.009656 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:25:46.018405 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:25:46.029893 systemd-logind[1489]: New session 1 of user core. Feb 13 15:25:46.036967 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:25:46.044444 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 15:25:46.049481 (systemd)[1710]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:25:46.052682 systemd-logind[1489]: New session c1 of user core. Feb 13 15:25:46.184528 systemd[1710]: Queued start job for default target default.target. Feb 13 15:25:46.195216 systemd[1710]: Created slice app.slice - User Application Slice. Feb 13 15:25:46.195286 systemd[1710]: Reached target paths.target - Paths. Feb 13 15:25:46.195368 systemd[1710]: Reached target timers.target - Timers. Feb 13 15:25:46.198344 systemd[1710]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:25:46.213613 systemd[1710]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:25:46.214511 systemd[1710]: Reached target sockets.target - Sockets. Feb 13 15:25:46.214577 systemd[1710]: Reached target basic.target - Basic System. Feb 13 15:25:46.214615 systemd[1710]: Reached target default.target - Main User Target. Feb 13 15:25:46.214643 systemd[1710]: Startup finished in 153ms. Feb 13 15:25:46.215345 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:25:46.225594 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:25:46.926248 systemd[1]: Started sshd@1-78.47.85.163:22-139.178.68.195:53884.service - OpenSSH per-connection server daemon (139.178.68.195:53884). Feb 13 15:25:47.920488 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 53884 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:47.924670 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:47.935232 systemd-logind[1489]: New session 2 of user core. Feb 13 15:25:47.944391 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:25:48.604490 sshd[1723]: Connection closed by 139.178.68.195 port 53884 Feb 13 15:25:48.605447 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:48.610850 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:25:48.611617 systemd[1]: sshd@1-78.47.85.163:22-139.178.68.195:53884.service: Deactivated successfully. Feb 13 15:25:48.614515 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:25:48.617131 systemd-logind[1489]: Removed session 2. Feb 13 15:25:48.789372 systemd[1]: Started sshd@2-78.47.85.163:22-139.178.68.195:53890.service - OpenSSH per-connection server daemon (139.178.68.195:53890). Feb 13 15:25:49.777632 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 53890 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:49.779707 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:49.786276 systemd-logind[1489]: New session 3 of user core. Feb 13 15:25:49.796413 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:25:50.459091 sshd[1731]: Connection closed by 139.178.68.195 port 53890 Feb 13 15:25:50.460167 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:50.466420 systemd[1]: sshd@2-78.47.85.163:22-139.178.68.195:53890.service: Deactivated successfully. Feb 13 15:25:50.470584 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:25:50.471787 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:25:50.473383 systemd-logind[1489]: Removed session 3. 
Feb 13 15:25:50.641323 systemd[1]: Started sshd@3-78.47.85.163:22-139.178.68.195:53894.service - OpenSSH per-connection server daemon (139.178.68.195:53894). Feb 13 15:25:50.938040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:25:50.952446 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:51.075755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:51.083300 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:51.133485 kubelet[1746]: E0213 15:25:51.133388 1746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:51.138227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:51.138486 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:51.139470 systemd[1]: kubelet.service: Consumed 152ms CPU time, 94.4M memory peak. Feb 13 15:25:51.629130 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 53894 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:51.630632 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:51.636748 systemd-logind[1489]: New session 4 of user core. Feb 13 15:25:51.642312 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:25:52.309845 sshd[1755]: Connection closed by 139.178.68.195 port 53894 Feb 13 15:25:52.309612 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:52.314735 systemd[1]: sshd@3-78.47.85.163:22-139.178.68.195:53894.service: Deactivated successfully. Feb 13 15:25:52.316689 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:25:52.318970 systemd-logind[1489]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:25:52.320227 systemd-logind[1489]: Removed session 4. Feb 13 15:25:52.482729 systemd[1]: Started sshd@4-78.47.85.163:22-139.178.68.195:53910.service - OpenSSH per-connection server daemon (139.178.68.195:53910). Feb 13 15:25:53.487610 sshd[1761]: Accepted publickey for core from 139.178.68.195 port 53910 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:53.489588 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:53.495065 systemd-logind[1489]: New session 5 of user core. Feb 13 15:25:53.506284 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:25:54.023834 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:25:54.024134 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:25:54.045575 sudo[1764]: pam_unix(sudo:session): session closed for user root Feb 13 15:25:54.208943 sshd[1763]: Connection closed by 139.178.68.195 port 53910 Feb 13 15:25:54.208075 sshd-session[1761]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:54.213860 systemd[1]: sshd@4-78.47.85.163:22-139.178.68.195:53910.service: Deactivated successfully. Feb 13 15:25:54.216292 systemd[1]: session-5.scope: Deactivated successfully. 
Feb 13 15:25:54.217698 systemd-logind[1489]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:25:54.219174 systemd-logind[1489]: Removed session 5. Feb 13 15:25:54.391364 systemd[1]: Started sshd@5-78.47.85.163:22-139.178.68.195:53920.service - OpenSSH per-connection server daemon (139.178.68.195:53920). Feb 13 15:25:55.382051 sshd[1770]: Accepted publickey for core from 139.178.68.195 port 53920 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:55.384164 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:55.390110 systemd-logind[1489]: New session 6 of user core. Feb 13 15:25:55.395396 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:25:55.905619 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:25:55.905994 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:25:55.910123 sudo[1774]: pam_unix(sudo:session): session closed for user root Feb 13 15:25:55.916844 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:25:55.917695 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:25:55.934949 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:25:55.984597 augenrules[1796]: No rules Feb 13 15:25:55.985634 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:25:55.985852 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:25:55.987158 sudo[1773]: pam_unix(sudo:session): session closed for user root Feb 13 15:25:56.148131 sshd[1772]: Connection closed by 139.178.68.195 port 53920 Feb 13 15:25:56.149143 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Feb 13 15:25:56.153922 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:25:56.154198 systemd[1]: sshd@5-78.47.85.163:22-139.178.68.195:53920.service: Deactivated successfully. Feb 13 15:25:56.155806 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:25:56.158361 systemd-logind[1489]: Removed session 6. Feb 13 15:25:56.321156 systemd[1]: Started sshd@6-78.47.85.163:22-139.178.68.195:53922.service - OpenSSH per-connection server daemon (139.178.68.195:53922). Feb 13 15:25:57.324278 sshd[1805]: Accepted publickey for core from 139.178.68.195 port 53922 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:25:57.326361 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:25:57.333792 systemd-logind[1489]: New session 7 of user core. Feb 13 15:25:57.347304 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:25:57.843891 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:25:57.844199 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:25:58.189529 systemd[1]: Starting docker.service - Docker Application Container Engine... 
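Each sudo record above uses the fixed "user : PWD=... ; USER=... ; COMMAND=..." layout. A small sketch, again assuming one journal entry per line, that extracts who ran what as root; the file name audit.log is hypothetical.

```python
import re

# Hypothetical input file containing journal lines in the sudo format shown above.
LOG_PATH = "audit.log"

SUDO = re.compile(
    r"sudo\[\d+\]:\s+(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<target>\S+) ; COMMAND=(?P<cmd>.+)$"
)

def privileged_commands(lines):
    """Yield (invoking user, target user, command) for every sudo command record."""
    for line in lines:
        m = SUDO.search(line)
        if m:
            yield m.group("user"), m.group("target"), m.group("cmd")

if __name__ == "__main__":
    with open(LOG_PATH) as fh:
        for user, target, cmd in privileged_commands(fh):
            print(f"{user} -> {target}: {cmd}")
```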
Feb 13 15:25:58.190516 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:25:58.440993 dockerd[1824]: time="2025-02-13T15:25:58.438418359Z" level=info msg="Starting up" Feb 13 15:25:58.521041 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3722757026-merged.mount: Deactivated successfully. Feb 13 15:25:58.542979 dockerd[1824]: time="2025-02-13T15:25:58.542932883Z" level=info msg="Loading containers: start." Feb 13 15:25:58.723022 kernel: Initializing XFRM netlink socket Feb 13 15:25:58.808582 systemd-networkd[1409]: docker0: Link UP Feb 13 15:25:58.833349 dockerd[1824]: time="2025-02-13T15:25:58.833278716Z" level=info msg="Loading containers: done." Feb 13 15:25:58.848412 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3051919962-merged.mount: Deactivated successfully. Feb 13 15:25:58.849158 dockerd[1824]: time="2025-02-13T15:25:58.849007639Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:25:58.849158 dockerd[1824]: time="2025-02-13T15:25:58.849113363Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:25:58.849752 dockerd[1824]: time="2025-02-13T15:25:58.849293741Z" level=info msg="Daemon has completed initialization" Feb 13 15:25:58.883493 dockerd[1824]: time="2025-02-13T15:25:58.883419559Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:25:58.884047 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:26:00.010974 containerd[1510]: time="2025-02-13T15:26:00.010668292Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:26:00.699653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4140641168.mount: Deactivated successfully. Feb 13 15:26:01.185823 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:26:01.194258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:01.308000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:01.313529 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:01.365299 kubelet[2076]: E0213 15:26:01.365201 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:01.368344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:01.368746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:01.369296 systemd[1]: kubelet.service: Consumed 149ms CPU time, 96.1M memory peak. 
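dockerd stamps its own RFC 3339 timestamps on the entries above ("Starting up" at 15:25:58.438, "Daemon has completed initialization" at 15:25:58.849). A small sketch for timing that window from two such time="..." values; the nanosecond fractions are trimmed to the microseconds datetime supports, and the two sample strings are copied from the lines above.

```python
from datetime import datetime

def parse_docker_ts(ts: str) -> datetime:
    """Parse a dockerd time=\"...\" value, trimming nanoseconds to microseconds."""
    ts = ts.rstrip("Z")
    base, _, frac = ts.partition(".")
    return datetime.strptime(f"{base}.{frac[:6]:0<6}", "%Y-%m-%dT%H:%M:%S.%f")

# Values copied from the 'Starting up' and 'Daemon has completed initialization' entries above.
started = parse_docker_ts("2025-02-13T15:25:58.438418359Z")
ready = parse_docker_ts("2025-02-13T15:25:58.849293741Z")
print(f"dockerd initialization took {(ready - started).total_seconds():.3f}s")
```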
Feb 13 15:26:03.447425 containerd[1510]: time="2025-02-13T15:26:03.445898673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:03.448339 containerd[1510]: time="2025-02-13T15:26:03.448285244Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=29865299" Feb 13 15:26:03.448768 containerd[1510]: time="2025-02-13T15:26:03.448738173Z" level=info msg="ImageCreate event name:\"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:03.455482 containerd[1510]: time="2025-02-13T15:26:03.455432482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:03.456539 containerd[1510]: time="2025-02-13T15:26:03.456507257Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"29862007\" in 3.445798016s" Feb 13 15:26:03.456649 containerd[1510]: time="2025-02-13T15:26:03.456633066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:deaeae5e8513d8c5921aee5b515f0fc2ac63b71dfe965318f71eb49468e74a4f\"" Feb 13 15:26:03.482821 containerd[1510]: time="2025-02-13T15:26:03.482737268Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 15:26:05.775529 containerd[1510]: time="2025-02-13T15:26:05.775466087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:05.777324 containerd[1510]: time="2025-02-13T15:26:05.777274655Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=26898614" Feb 13 15:26:05.777976 containerd[1510]: time="2025-02-13T15:26:05.777722918Z" level=info msg="ImageCreate event name:\"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:05.781387 containerd[1510]: time="2025-02-13T15:26:05.781318658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:05.782734 containerd[1510]: time="2025-02-13T15:26:05.782618976Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"28302323\" in 2.299590899s" Feb 13 15:26:05.782734 containerd[1510]: time="2025-02-13T15:26:05.782653409Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:e31753dd49b05da8fcb7deb26f2a5942a6747a0e6d4492f3dc8544123b97a3a2\"" Feb 13 
15:26:05.810493 containerd[1510]: time="2025-02-13T15:26:05.810212675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 15:26:07.497382 containerd[1510]: time="2025-02-13T15:26:07.496324941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.497382 containerd[1510]: time="2025-02-13T15:26:07.497339508Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=16164954" Feb 13 15:26:07.497917 containerd[1510]: time="2025-02-13T15:26:07.497880805Z" level=info msg="ImageCreate event name:\"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.505442 containerd[1510]: time="2025-02-13T15:26:07.505377936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:07.506248 containerd[1510]: time="2025-02-13T15:26:07.506216737Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"17568681\" in 1.69596347s" Feb 13 15:26:07.506343 containerd[1510]: time="2025-02-13T15:26:07.506328155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:ea60c047fad7c01bf50f1f0259a4aeea2cc4401850d5a95802cc1d07d9021eb4\"" Feb 13 15:26:07.532116 containerd[1510]: time="2025-02-13T15:26:07.532074531Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:26:08.791546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2070586921.mount: Deactivated successfully. 
Feb 13 15:26:09.124852 containerd[1510]: time="2025-02-13T15:26:09.123697105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:09.139127 containerd[1510]: time="2025-02-13T15:26:09.139029577Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663396" Feb 13 15:26:09.144075 containerd[1510]: time="2025-02-13T15:26:09.144025021Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:09.147106 containerd[1510]: time="2025-02-13T15:26:09.147052074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:09.147686 containerd[1510]: time="2025-02-13T15:26:09.147643415Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.615305575s" Feb 13 15:26:09.147686 containerd[1510]: time="2025-02-13T15:26:09.147683768Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\"" Feb 13 15:26:09.178954 containerd[1510]: time="2025-02-13T15:26:09.178901862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:26:09.795357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286513153.mount: Deactivated successfully. 
Feb 13 15:26:11.141654 containerd[1510]: time="2025-02-13T15:26:11.141571371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.143053 containerd[1510]: time="2025-02-13T15:26:11.143000121Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:26:11.143956 containerd[1510]: time="2025-02-13T15:26:11.143795844Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.147897 containerd[1510]: time="2025-02-13T15:26:11.147804334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.149949 containerd[1510]: time="2025-02-13T15:26:11.149749368Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.970637741s" Feb 13 15:26:11.149949 containerd[1510]: time="2025-02-13T15:26:11.149808999Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:26:11.173050 containerd[1510]: time="2025-02-13T15:26:11.172809214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:26:11.436572 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:26:11.446429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:11.557146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:11.559437 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:11.612027 kubelet[2177]: E0213 15:26:11.611964 2177 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:11.615331 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:11.615659 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:11.616250 systemd[1]: kubelet.service: Consumed 139ms CPU time, 94.1M memory peak. Feb 13 15:26:11.751310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2218574146.mount: Deactivated successfully. 
Feb 13 15:26:11.757544 containerd[1510]: time="2025-02-13T15:26:11.757465301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.759009 containerd[1510]: time="2025-02-13T15:26:11.758943004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:26:11.759990 containerd[1510]: time="2025-02-13T15:26:11.759353623Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.761875 containerd[1510]: time="2025-02-13T15:26:11.761820140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:11.763946 containerd[1510]: time="2025-02-13T15:26:11.762715528Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 589.8678ms" Feb 13 15:26:11.763946 containerd[1510]: time="2025-02-13T15:26:11.762748124Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:26:11.784463 containerd[1510]: time="2025-02-13T15:26:11.784407176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 15:26:12.430994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367966622.mount: Deactivated successfully. Feb 13 15:26:15.947211 containerd[1510]: time="2025-02-13T15:26:15.947138407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:15.948901 containerd[1510]: time="2025-02-13T15:26:15.948568404Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Feb 13 15:26:15.949833 containerd[1510]: time="2025-02-13T15:26:15.949759109Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:15.953472 containerd[1510]: time="2025-02-13T15:26:15.953405935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:15.955388 containerd[1510]: time="2025-02-13T15:26:15.955246765Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.170793595s" Feb 13 15:26:15.955388 containerd[1510]: time="2025-02-13T15:26:15.955286401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Feb 13 15:26:21.687078 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
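Every image pull above ends with a 'Pulled image ... size \"N\" in D' record, where D is a Go-style duration (e.g. 589.8678ms, 4.170793595s). A parsing sketch, assuming one containerd entry per line and only the ms/s duration forms seen here; the file name containerd.log is hypothetical.

```python
import re

# Hypothetical input file holding containerd journal lines like the pull records above.
LOG_PATH = "containerd.log"

PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*size \\"(?P<size>\d+)\\" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
)

def parse_pulls(lines):
    """Yield (image, size in bytes, pull time in seconds) for each completed pull."""
    for line in lines:
        m = PULLED.search(line)
        if m:
            # Only 'ms' and 's' suffixes appear in this log; other Go duration units are not handled.
            seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
            yield m.group("image"), int(m.group("size")), seconds

if __name__ == "__main__":
    with open(LOG_PATH) as fh:
        for image, size, secs in parse_pulls(fh):
            print(f"{image}: {size / 1e6:.1f} MB in {secs:.2f}s")
```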
Feb 13 15:26:21.698129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:21.818228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:21.820011 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:21.867937 kubelet[2305]: E0213 15:26:21.867188 2305 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:21.870253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:21.870542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:21.870857 systemd[1]: kubelet.service: Consumed 129ms CPU time, 96.5M memory peak. Feb 13 15:26:22.852629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:22.852782 systemd[1]: kubelet.service: Consumed 129ms CPU time, 96.5M memory peak. Feb 13 15:26:22.861487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:22.885755 systemd[1]: Reload requested from client PID 2319 ('systemctl') (unit session-7.scope)... Feb 13 15:26:22.885780 systemd[1]: Reloading... Feb 13 15:26:23.021938 zram_generator::config[2367]: No configuration found. Feb 13 15:26:23.120658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:26:23.210860 systemd[1]: Reloading finished in 324 ms. Feb 13 15:26:23.265360 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:23.276788 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:26:23.280650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:23.281392 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:26:23.281728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:23.281781 systemd[1]: kubelet.service: Consumed 93ms CPU time, 83.4M memory peak. Feb 13 15:26:23.287254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:23.399812 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:23.405540 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:26:23.454617 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:26:23.454617 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:26:23.454617 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:26:23.455021 kubelet[2415]: I0213 15:26:23.454717 2415 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:26:24.124753 kubelet[2415]: I0213 15:26:24.124684 2415 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:26:24.124753 kubelet[2415]: I0213 15:26:24.124729 2415 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:26:24.125126 kubelet[2415]: I0213 15:26:24.125047 2415 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:26:24.145592 kubelet[2415]: I0213 15:26:24.145327 2415 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:26:24.146079 kubelet[2415]: E0213 15:26:24.145991 2415 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.85.163:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.160391 kubelet[2415]: I0213 15:26:24.160317 2415 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:26:24.162002 kubelet[2415]: I0213 15:26:24.161925 2415 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:26:24.162302 kubelet[2415]: I0213 15:26:24.161986 2415 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-9-12db063e25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:26:24.162424 kubelet[2415]: I0213 15:26:24.162360 2415 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:26:24.162424 kubelet[2415]: I0213 15:26:24.162373 2415 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 
15:26:24.162631 kubelet[2415]: I0213 15:26:24.162596 2415 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:24.163993 kubelet[2415]: I0213 15:26:24.163940 2415 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:26:24.163993 kubelet[2415]: I0213 15:26:24.163980 2415 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:26:24.164222 kubelet[2415]: I0213 15:26:24.164149 2415 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:26:24.164281 kubelet[2415]: I0213 15:26:24.164233 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:26:24.167833 kubelet[2415]: W0213 15:26:24.167219 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.85.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-9-12db063e25&limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.167833 kubelet[2415]: E0213 15:26:24.167304 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.85.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-9-12db063e25&limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.167833 kubelet[2415]: W0213 15:26:24.167369 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.85.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.167833 kubelet[2415]: E0213 15:26:24.167401 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.85.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.169782 kubelet[2415]: I0213 15:26:24.168472 2415 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:26:24.169782 kubelet[2415]: I0213 15:26:24.168873 2415 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:26:24.169782 kubelet[2415]: W0213 15:26:24.169031 2415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
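The CSR post and every list/watch attempt above fail with "dial tcp 78.47.85.163:6443: connect: connection refused" because the kube-apiserver static pod has not started yet. A minimal sketch of the same reachability check, retrying until the port accepts connections; host and port are taken from the log, the retry interval and timeout are arbitrary.

```python
import socket
import time

# Endpoint taken from the 'connection refused' errors above; purely an illustrative probe.
HOST, PORT = "78.47.85.163", 6443

def wait_for_apiserver(host: str = HOST, port: int = PORT,
                       timeout: float = 120.0, interval: float = 2.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2.0):
                return True
        except OSError:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    state = "accepting connections" if wait_for_apiserver() else "still refusing connections"
    print(f"{HOST}:{PORT} is {state}")
```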
Feb 13 15:26:24.171549 kubelet[2415]: I0213 15:26:24.171521 2415 server.go:1264] "Started kubelet" Feb 13 15:26:24.174575 kubelet[2415]: I0213 15:26:24.174545 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:26:24.177126 kubelet[2415]: I0213 15:26:24.177068 2415 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:26:24.178320 kubelet[2415]: I0213 15:26:24.178288 2415 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:26:24.179280 kubelet[2415]: I0213 15:26:24.179199 2415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:26:24.179478 kubelet[2415]: I0213 15:26:24.179456 2415 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:26:24.182825 kubelet[2415]: I0213 15:26:24.182799 2415 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:26:24.185068 kubelet[2415]: E0213 15:26:24.185010 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.85.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-9-12db063e25?timeout=10s\": dial tcp 78.47.85.163:6443: connect: connection refused" interval="200ms" Feb 13 15:26:24.185293 kubelet[2415]: E0213 15:26:24.185119 2415 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.85.163:6443/api/v1/namespaces/default/events\": dial tcp 78.47.85.163:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-9-12db063e25.1823ce00db68087e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-9-12db063e25,UID:ci-4230-0-1-9-12db063e25,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-9-12db063e25,},FirstTimestamp:2025-02-13 15:26:24.171493502 +0000 UTC m=+0.762516767,LastTimestamp:2025-02-13 15:26:24.171493502 +0000 UTC m=+0.762516767,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-9-12db063e25,}" Feb 13 15:26:24.185815 kubelet[2415]: I0213 15:26:24.185781 2415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:26:24.186844 kubelet[2415]: I0213 15:26:24.186589 2415 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:26:24.188303 kubelet[2415]: W0213 15:26:24.188094 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.85.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.188303 kubelet[2415]: E0213 15:26:24.188167 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.85.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.190018 kubelet[2415]: E0213 15:26:24.189140 2415 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:26:24.190018 kubelet[2415]: I0213 15:26:24.189636 2415 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:26:24.190018 kubelet[2415]: I0213 15:26:24.189804 2415 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:26:24.190018 kubelet[2415]: I0213 15:26:24.189818 2415 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:26:24.198683 kubelet[2415]: I0213 15:26:24.198611 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:26:24.199936 kubelet[2415]: I0213 15:26:24.199879 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:26:24.200100 kubelet[2415]: I0213 15:26:24.200086 2415 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:26:24.200140 kubelet[2415]: I0213 15:26:24.200113 2415 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:26:24.200195 kubelet[2415]: E0213 15:26:24.200172 2415 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:26:24.208011 kubelet[2415]: W0213 15:26:24.207864 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.85.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.208011 kubelet[2415]: E0213 15:26:24.207958 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.85.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:24.220740 kubelet[2415]: I0213 15:26:24.220684 2415 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:26:24.220740 kubelet[2415]: I0213 15:26:24.220714 2415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:26:24.220740 kubelet[2415]: I0213 15:26:24.220739 2415 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:24.222695 kubelet[2415]: I0213 15:26:24.222665 2415 policy_none.go:49] "None policy: Start" Feb 13 15:26:24.223959 kubelet[2415]: I0213 15:26:24.223490 2415 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:26:24.223959 kubelet[2415]: I0213 15:26:24.223609 2415 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:26:24.231818 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:26:24.247979 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:26:24.253127 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:26:24.264799 kubelet[2415]: I0213 15:26:24.264757 2415 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:26:24.267315 kubelet[2415]: I0213 15:26:24.265805 2415 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:26:24.267315 kubelet[2415]: I0213 15:26:24.266184 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:26:24.271265 kubelet[2415]: E0213 15:26:24.271214 2415 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-9-12db063e25\" not found" Feb 13 15:26:24.285697 kubelet[2415]: I0213 15:26:24.285660 2415 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.286986 kubelet[2415]: E0213 15:26:24.286879 2415 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.85.163:6443/api/v1/nodes\": dial tcp 78.47.85.163:6443: connect: connection refused" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.301385 kubelet[2415]: I0213 15:26:24.301295 2415 topology_manager.go:215] "Topology Admit Handler" podUID="65a9b4d34c640138932439d9da319833" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.305936 kubelet[2415]: I0213 15:26:24.305292 2415 topology_manager.go:215] "Topology Admit Handler" podUID="2772845ebaee306c9cbf326d1db0d0df" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.307399 kubelet[2415]: I0213 15:26:24.307369 2415 topology_manager.go:215] "Topology Admit Handler" podUID="9544dc644f39cd5a525bc72071012364" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.313966 systemd[1]: Created slice kubepods-burstable-pod65a9b4d34c640138932439d9da319833.slice - libcontainer container kubepods-burstable-pod65a9b4d34c640138932439d9da319833.slice. Feb 13 15:26:24.332614 systemd[1]: Created slice kubepods-burstable-pod2772845ebaee306c9cbf326d1db0d0df.slice - libcontainer container kubepods-burstable-pod2772845ebaee306c9cbf326d1db0d0df.slice. Feb 13 15:26:24.350730 systemd[1]: Created slice kubepods-burstable-pod9544dc644f39cd5a525bc72071012364.slice - libcontainer container kubepods-burstable-pod9544dc644f39cd5a525bc72071012364.slice. 
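With the systemd cgroup driver reported in the container manager config above, each admitted pod gets a slice named after its QoS class and UID, as in the kubepods-burstable-pod65a9b4d34c640138932439d9da319833.slice units just created. A sketch of that naming pattern; it only covers dash-free UIDs like the static-pod hashes shown here (real pod UIDs contain dashes, which the systemd driver escapes and this helper does not attempt).

```python
# Rebuild the per-pod slice names visible above (pattern kubepods-<qos>-pod<uid>.slice).
# Guaranteed-QoS pods sit directly under kubepods.slice in this layout.
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    qos = qos_class.lower()
    if qos == "guaranteed":
        return f"kubepods-pod{pod_uid}.slice"
    if qos in ("burstable", "besteffort"):
        return f"kubepods-{qos}-pod{pod_uid}.slice"
    raise ValueError(f"unknown QoS class: {qos_class}")

# UID copied from the kube-apiserver static pod entries above.
print(pod_slice_name("Burstable", "65a9b4d34c640138932439d9da319833"))
# -> kubepods-burstable-pod65a9b4d34c640138932439d9da319833.slice
```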
Feb 13 15:26:24.386529 kubelet[2415]: E0213 15:26:24.386329 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.85.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-9-12db063e25?timeout=10s\": dial tcp 78.47.85.163:6443: connect: connection refused" interval="400ms" Feb 13 15:26:24.390431 kubelet[2415]: I0213 15:26:24.390180 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390431 kubelet[2415]: I0213 15:26:24.390220 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390431 kubelet[2415]: I0213 15:26:24.390243 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390431 kubelet[2415]: I0213 15:26:24.390259 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390431 kubelet[2415]: I0213 15:26:24.390290 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9544dc644f39cd5a525bc72071012364-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-9-12db063e25\" (UID: \"9544dc644f39cd5a525bc72071012364\") " pod="kube-system/kube-scheduler-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390640 kubelet[2415]: I0213 15:26:24.390305 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390640 kubelet[2415]: I0213 15:26:24.390321 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390640 kubelet[2415]: I0213 15:26:24.390336 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.390640 kubelet[2415]: I0213 15:26:24.390351 2415 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.491715 kubelet[2415]: I0213 15:26:24.491630 2415 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.492870 kubelet[2415]: E0213 15:26:24.492286 2415 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.85.163:6443/api/v1/nodes\": dial tcp 78.47.85.163:6443: connect: connection refused" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.628664 containerd[1510]: time="2025-02-13T15:26:24.628551919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-9-12db063e25,Uid:65a9b4d34c640138932439d9da319833,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:24.648945 containerd[1510]: time="2025-02-13T15:26:24.648171872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-9-12db063e25,Uid:2772845ebaee306c9cbf326d1db0d0df,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:24.655263 containerd[1510]: time="2025-02-13T15:26:24.655189906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-9-12db063e25,Uid:9544dc644f39cd5a525bc72071012364,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:24.788163 kubelet[2415]: E0213 15:26:24.787938 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.85.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-9-12db063e25?timeout=10s\": dial tcp 78.47.85.163:6443: connect: connection refused" interval="800ms" Feb 13 15:26:24.896858 kubelet[2415]: I0213 15:26:24.896514 2415 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:24.897241 kubelet[2415]: E0213 15:26:24.897212 2415 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.85.163:6443/api/v1/nodes\": dial tcp 78.47.85.163:6443: connect: connection refused" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:25.024022 kubelet[2415]: W0213 15:26:25.023541 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.85.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.024022 kubelet[2415]: E0213 15:26:25.023607 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.85.163:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.038102 kubelet[2415]: W0213 15:26:25.037955 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.85.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 
78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.038102 kubelet[2415]: E0213 15:26:25.038057 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.85.163:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.178568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260935390.mount: Deactivated successfully. Feb 13 15:26:25.185342 containerd[1510]: time="2025-02-13T15:26:25.185285543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:25.187303 containerd[1510]: time="2025-02-13T15:26:25.187253906Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:25.189143 containerd[1510]: time="2025-02-13T15:26:25.188986522Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:26:25.189244 containerd[1510]: time="2025-02-13T15:26:25.189185671Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:26:25.191561 containerd[1510]: time="2025-02-13T15:26:25.191493653Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:25.193933 containerd[1510]: time="2025-02-13T15:26:25.193405699Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:25.193933 containerd[1510]: time="2025-02-13T15:26:25.193595448Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:26:25.197492 containerd[1510]: time="2025-02-13T15:26:25.197385622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:25.199187 containerd[1510]: time="2025-02-13T15:26:25.198935609Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.239219ms" Feb 13 15:26:25.200201 containerd[1510]: time="2025-02-13T15:26:25.200146857Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.868232ms" Feb 13 15:26:25.201723 containerd[1510]: time="2025-02-13T15:26:25.201677206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo 
tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.385307ms" Feb 13 15:26:25.325806 containerd[1510]: time="2025-02-13T15:26:25.324902980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:25.325806 containerd[1510]: time="2025-02-13T15:26:25.325217242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:25.325806 containerd[1510]: time="2025-02-13T15:26:25.325319275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.326614 containerd[1510]: time="2025-02-13T15:26:25.326541723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.334443 containerd[1510]: time="2025-02-13T15:26:25.334338058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:25.334595 containerd[1510]: time="2025-02-13T15:26:25.334559805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:25.335224 containerd[1510]: time="2025-02-13T15:26:25.335093613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.337148 containerd[1510]: time="2025-02-13T15:26:25.336610002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.337148 containerd[1510]: time="2025-02-13T15:26:25.336747074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:25.337148 containerd[1510]: time="2025-02-13T15:26:25.336800711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:25.337148 containerd[1510]: time="2025-02-13T15:26:25.336817110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.338612 containerd[1510]: time="2025-02-13T15:26:25.337461312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:25.360210 systemd[1]: Started cri-containerd-abb0b246e95f4fcbdb2bf5cd8912cdf53c9180547789e82c8514849a489407d5.scope - libcontainer container abb0b246e95f4fcbdb2bf5cd8912cdf53c9180547789e82c8514849a489407d5. Feb 13 15:26:25.370159 systemd[1]: Started cri-containerd-b31ef8490d70e47cf3b78246df2f7cfdfa72c8b04500599810c7e530cf8ea92e.scope - libcontainer container b31ef8490d70e47cf3b78246df2f7cfdfa72c8b04500599810c7e530cf8ea92e. Feb 13 15:26:25.391242 systemd[1]: Started cri-containerd-bb676302a136ce8f214b93f480deb5259dab5c2c93b3a3d21185cc715612e8fb.scope - libcontainer container bb676302a136ce8f214b93f480deb5259dab5c2c93b3a3d21185cc715612e8fb. 
Feb 13 15:26:25.438807 containerd[1510]: time="2025-02-13T15:26:25.438751794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-9-12db063e25,Uid:9544dc644f39cd5a525bc72071012364,Namespace:kube-system,Attempt:0,} returns sandbox id \"abb0b246e95f4fcbdb2bf5cd8912cdf53c9180547789e82c8514849a489407d5\"" Feb 13 15:26:25.442755 containerd[1510]: time="2025-02-13T15:26:25.442701838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-9-12db063e25,Uid:65a9b4d34c640138932439d9da319833,Namespace:kube-system,Attempt:0,} returns sandbox id \"b31ef8490d70e47cf3b78246df2f7cfdfa72c8b04500599810c7e530cf8ea92e\"" Feb 13 15:26:25.446596 containerd[1510]: time="2025-02-13T15:26:25.446241707Z" level=info msg="CreateContainer within sandbox \"abb0b246e95f4fcbdb2bf5cd8912cdf53c9180547789e82c8514849a489407d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:26:25.449340 containerd[1510]: time="2025-02-13T15:26:25.449296565Z" level=info msg="CreateContainer within sandbox \"b31ef8490d70e47cf3b78246df2f7cfdfa72c8b04500599810c7e530cf8ea92e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:26:25.458249 kubelet[2415]: W0213 15:26:25.458064 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.85.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-9-12db063e25&limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.458249 kubelet[2415]: E0213 15:26:25.458151 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.85.163:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-9-12db063e25&limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.465871 containerd[1510]: time="2025-02-13T15:26:25.465821980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-9-12db063e25,Uid:2772845ebaee306c9cbf326d1db0d0df,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb676302a136ce8f214b93f480deb5259dab5c2c93b3a3d21185cc715612e8fb\"" Feb 13 15:26:25.470964 containerd[1510]: time="2025-02-13T15:26:25.470870159Z" level=info msg="CreateContainer within sandbox \"bb676302a136ce8f214b93f480deb5259dab5c2c93b3a3d21185cc715612e8fb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:26:25.476122 containerd[1510]: time="2025-02-13T15:26:25.476044931Z" level=info msg="CreateContainer within sandbox \"abb0b246e95f4fcbdb2bf5cd8912cdf53c9180547789e82c8514849a489407d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"893d789abb3ceea3637665f8aca4832d1025d15f4e7385419641c7f564cafe24\"" Feb 13 15:26:25.476721 containerd[1510]: time="2025-02-13T15:26:25.476244359Z" level=info msg="CreateContainer within sandbox \"b31ef8490d70e47cf3b78246df2f7cfdfa72c8b04500599810c7e530cf8ea92e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7f06735c10fc750f82f6344b49e779be19d8684d498aeeaad3ab73df33866889\"" Feb 13 15:26:25.478057 containerd[1510]: time="2025-02-13T15:26:25.477033792Z" level=info msg="StartContainer for \"893d789abb3ceea3637665f8aca4832d1025d15f4e7385419641c7f564cafe24\"" Feb 13 15:26:25.478401 containerd[1510]: time="2025-02-13T15:26:25.478371592Z" level=info msg="StartContainer for 
\"7f06735c10fc750f82f6344b49e779be19d8684d498aeeaad3ab73df33866889\"" Feb 13 15:26:25.501531 containerd[1510]: time="2025-02-13T15:26:25.501481774Z" level=info msg="CreateContainer within sandbox \"bb676302a136ce8f214b93f480deb5259dab5c2c93b3a3d21185cc715612e8fb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b017ae1cfa627c08dac48e1d6b1364c0a8fc6fe8b706ad8a32faf617b85dcd9\"" Feb 13 15:26:25.503237 containerd[1510]: time="2025-02-13T15:26:25.503196512Z" level=info msg="StartContainer for \"5b017ae1cfa627c08dac48e1d6b1364c0a8fc6fe8b706ad8a32faf617b85dcd9\"" Feb 13 15:26:25.516182 systemd[1]: Started cri-containerd-7f06735c10fc750f82f6344b49e779be19d8684d498aeeaad3ab73df33866889.scope - libcontainer container 7f06735c10fc750f82f6344b49e779be19d8684d498aeeaad3ab73df33866889. Feb 13 15:26:25.517436 systemd[1]: Started cri-containerd-893d789abb3ceea3637665f8aca4832d1025d15f4e7385419641c7f564cafe24.scope - libcontainer container 893d789abb3ceea3637665f8aca4832d1025d15f4e7385419641c7f564cafe24. Feb 13 15:26:25.558334 systemd[1]: Started cri-containerd-5b017ae1cfa627c08dac48e1d6b1364c0a8fc6fe8b706ad8a32faf617b85dcd9.scope - libcontainer container 5b017ae1cfa627c08dac48e1d6b1364c0a8fc6fe8b706ad8a32faf617b85dcd9. Feb 13 15:26:25.587958 containerd[1510]: time="2025-02-13T15:26:25.586677016Z" level=info msg="StartContainer for \"7f06735c10fc750f82f6344b49e779be19d8684d498aeeaad3ab73df33866889\" returns successfully" Feb 13 15:26:25.592091 kubelet[2415]: E0213 15:26:25.589403 2415 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.85.163:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-9-12db063e25?timeout=10s\": dial tcp 78.47.85.163:6443: connect: connection refused" interval="1.6s" Feb 13 15:26:25.601636 containerd[1510]: time="2025-02-13T15:26:25.601590167Z" level=info msg="StartContainer for \"893d789abb3ceea3637665f8aca4832d1025d15f4e7385419641c7f564cafe24\" returns successfully" Feb 13 15:26:25.621652 containerd[1510]: time="2025-02-13T15:26:25.621603374Z" level=info msg="StartContainer for \"5b017ae1cfa627c08dac48e1d6b1364c0a8fc6fe8b706ad8a32faf617b85dcd9\" returns successfully" Feb 13 15:26:25.623701 kubelet[2415]: W0213 15:26:25.623640 2415 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.85.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.623847 kubelet[2415]: E0213 15:26:25.623828 2415 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.85.163:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.85.163:6443: connect: connection refused Feb 13 15:26:25.700885 kubelet[2415]: I0213 15:26:25.700519 2415 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:25.701210 kubelet[2415]: E0213 15:26:25.701178 2415 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.85.163:6443/api/v1/nodes\": dial tcp 78.47.85.163:6443: connect: connection refused" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:27.308848 kubelet[2415]: I0213 15:26:27.306769 2415 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:27.780936 kubelet[2415]: I0213 15:26:27.779419 2415 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:28.170705 kubelet[2415]: I0213 15:26:28.170330 2415 apiserver.go:52] "Watching apiserver" Feb 13 15:26:28.187003 kubelet[2415]: I0213 15:26:28.186930 2415 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:26:29.849171 systemd[1]: Reload requested from client PID 2697 ('systemctl') (unit session-7.scope)... Feb 13 15:26:29.849518 systemd[1]: Reloading... Feb 13 15:26:29.946950 zram_generator::config[2748]: No configuration found. Feb 13 15:26:30.046307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:26:30.153013 systemd[1]: Reloading finished in 303 ms. Feb 13 15:26:30.183852 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:30.192760 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:26:30.193275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:30.193369 systemd[1]: kubelet.service: Consumed 1.162s CPU time, 112.2M memory peak. Feb 13 15:26:30.202565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:30.314565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:30.325412 (kubelet)[2787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:26:30.388107 kubelet[2787]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:26:30.388107 kubelet[2787]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:26:30.388107 kubelet[2787]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:26:30.388107 kubelet[2787]: I0213 15:26:30.386349 2787 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:26:30.391502 kubelet[2787]: I0213 15:26:30.391471 2787 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:26:30.392158 kubelet[2787]: I0213 15:26:30.391687 2787 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:26:30.392158 kubelet[2787]: I0213 15:26:30.391986 2787 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:26:30.395432 kubelet[2787]: I0213 15:26:30.395404 2787 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:26:30.397108 kubelet[2787]: I0213 15:26:30.397069 2787 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:26:30.405035 kubelet[2787]: I0213 15:26:30.404446 2787 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:26:30.405553 kubelet[2787]: I0213 15:26:30.405515 2787 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:26:30.406620 kubelet[2787]: I0213 15:26:30.405637 2787 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-9-12db063e25","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:26:30.406620 kubelet[2787]: I0213 15:26:30.405843 2787 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:26:30.406620 kubelet[2787]: I0213 15:26:30.405936 2787 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:26:30.406620 kubelet[2787]: I0213 15:26:30.405979 2787 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:30.406620 kubelet[2787]: I0213 15:26:30.406088 2787 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:26:30.406868 kubelet[2787]: I0213 15:26:30.406100 2787 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:26:30.406868 kubelet[2787]: I0213 15:26:30.406126 2787 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:26:30.406868 kubelet[2787]: I0213 15:26:30.406140 2787 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:26:30.407380 kubelet[2787]: I0213 15:26:30.407357 2787 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:26:30.407583 kubelet[2787]: I0213 15:26:30.407561 2787 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:26:30.408145 kubelet[2787]: I0213 15:26:30.408081 2787 server.go:1264] "Started kubelet" Feb 13 15:26:30.410713 kubelet[2787]: I0213 15:26:30.410675 2787 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:26:30.416736 kubelet[2787]: I0213 15:26:30.416305 2787 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:26:30.418420 kubelet[2787]: I0213 15:26:30.417461 2787 server.go:455] "Adding 
debug handlers to kubelet server" Feb 13 15:26:30.418692 kubelet[2787]: I0213 15:26:30.418610 2787 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:26:30.419979 kubelet[2787]: I0213 15:26:30.419034 2787 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:26:30.421536 kubelet[2787]: I0213 15:26:30.421004 2787 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:26:30.423166 kubelet[2787]: I0213 15:26:30.423113 2787 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:26:30.425204 kubelet[2787]: I0213 15:26:30.423271 2787 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:26:30.425204 kubelet[2787]: I0213 15:26:30.423784 2787 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:26:30.425204 kubelet[2787]: I0213 15:26:30.423890 2787 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:26:30.426023 kubelet[2787]: I0213 15:26:30.425990 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:26:30.430066 kubelet[2787]: I0213 15:26:30.430025 2787 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:26:30.430151 kubelet[2787]: I0213 15:26:30.430076 2787 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:26:30.430151 kubelet[2787]: I0213 15:26:30.430097 2787 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:26:30.430193 kubelet[2787]: E0213 15:26:30.430141 2787 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:26:30.447767 kubelet[2787]: I0213 15:26:30.447729 2787 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:26:30.518374 kubelet[2787]: I0213 15:26:30.518339 2787 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:26:30.518374 kubelet[2787]: I0213 15:26:30.518357 2787 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:26:30.518553 kubelet[2787]: I0213 15:26:30.518455 2787 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:30.518731 kubelet[2787]: I0213 15:26:30.518709 2787 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:26:30.518781 kubelet[2787]: I0213 15:26:30.518727 2787 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:26:30.518781 kubelet[2787]: I0213 15:26:30.518749 2787 policy_none.go:49] "None policy: Start" Feb 13 15:26:30.519680 kubelet[2787]: I0213 15:26:30.519637 2787 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:26:30.519680 kubelet[2787]: I0213 15:26:30.519681 2787 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:26:30.519890 kubelet[2787]: I0213 15:26:30.519866 2787 state_mem.go:75] "Updated machine memory state" Feb 13 15:26:30.526642 kubelet[2787]: I0213 15:26:30.526587 2787 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.529799 kubelet[2787]: I0213 15:26:30.529749 2787 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:26:30.532401 kubelet[2787]: I0213 15:26:30.532288 2787 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:26:30.532401 kubelet[2787]: I0213 15:26:30.532389 2787 topology_manager.go:215] "Topology Admit Handler" podUID="65a9b4d34c640138932439d9da319833" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.532587 kubelet[2787]: I0213 15:26:30.532471 2787 topology_manager.go:215] "Topology Admit Handler" podUID="2772845ebaee306c9cbf326d1db0d0df" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.532587 kubelet[2787]: I0213 15:26:30.532511 2787 topology_manager.go:215] "Topology Admit Handler" podUID="9544dc644f39cd5a525bc72071012364" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.533353 kubelet[2787]: I0213 15:26:30.533309 2787 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:26:30.544194 kubelet[2787]: I0213 15:26:30.544164 2787 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.544312 kubelet[2787]: I0213 15:26:30.544248 2787 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.545581 kubelet[2787]: E0213 15:26:30.545544 2787 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" already exists" pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.725984 kubelet[2787]: I0213 15:26:30.725901 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9544dc644f39cd5a525bc72071012364-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-9-12db063e25\" (UID: \"9544dc644f39cd5a525bc72071012364\") " pod="kube-system/kube-scheduler-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.725984 kubelet[2787]: I0213 15:26:30.725995 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726238 kubelet[2787]: I0213 15:26:30.726034 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726238 kubelet[2787]: I0213 15:26:30.726068 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65a9b4d34c640138932439d9da319833-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-9-12db063e25\" (UID: \"65a9b4d34c640138932439d9da319833\") " pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726238 kubelet[2787]: I0213 15:26:30.726130 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " 
pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726238 kubelet[2787]: I0213 15:26:30.726168 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726238 kubelet[2787]: I0213 15:26:30.726200 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726477 kubelet[2787]: I0213 15:26:30.726232 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:30.726477 kubelet[2787]: I0213 15:26:30.726269 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2772845ebaee306c9cbf326d1db0d0df-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-9-12db063e25\" (UID: \"2772845ebaee306c9cbf326d1db0d0df\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" Feb 13 15:26:31.407670 kubelet[2787]: I0213 15:26:31.407626 2787 apiserver.go:52] "Watching apiserver" Feb 13 15:26:31.423475 kubelet[2787]: I0213 15:26:31.423348 2787 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:26:31.528441 kubelet[2787]: E0213 15:26:31.527887 2787 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4230-0-1-9-12db063e25\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" Feb 13 15:26:31.626726 kubelet[2787]: I0213 15:26:31.626449 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-9-12db063e25" podStartSLOduration=1.6264305559999999 podStartE2EDuration="1.626430556s" podCreationTimestamp="2025-02-13 15:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:31.599537085 +0000 UTC m=+1.269668526" watchObservedRunningTime="2025-02-13 15:26:31.626430556 +0000 UTC m=+1.296561997" Feb 13 15:26:31.652534 kubelet[2787]: I0213 15:26:31.652477 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-9-12db063e25" podStartSLOduration=1.652457143 podStartE2EDuration="1.652457143s" podCreationTimestamp="2025-02-13 15:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:31.629676265 +0000 UTC m=+1.299807746" watchObservedRunningTime="2025-02-13 15:26:31.652457143 +0000 UTC m=+1.322588584" Feb 13 15:26:35.603423 kubelet[2787]: I0213 15:26:35.603098 2787 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-9-12db063e25" podStartSLOduration=5.603077852 podStartE2EDuration="5.603077852s" podCreationTimestamp="2025-02-13 15:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:31.653221432 +0000 UTC m=+1.323352833" watchObservedRunningTime="2025-02-13 15:26:35.603077852 +0000 UTC m=+5.273209293" Feb 13 15:26:35.793984 sudo[1808]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:35.952985 sshd[1807]: Connection closed by 139.178.68.195 port 53922 Feb 13 15:26:35.954189 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:35.959088 systemd[1]: sshd@6-78.47.85.163:22-139.178.68.195:53922.service: Deactivated successfully. Feb 13 15:26:35.961666 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:26:35.961955 systemd[1]: session-7.scope: Consumed 8.720s CPU time, 260.1M memory peak. Feb 13 15:26:35.962956 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:26:35.964334 systemd-logind[1489]: Removed session 7. Feb 13 15:26:44.637072 kubelet[2787]: I0213 15:26:44.636888 2787 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:26:44.637902 containerd[1510]: time="2025-02-13T15:26:44.637856443Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:26:44.638664 kubelet[2787]: I0213 15:26:44.638187 2787 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:26:45.243196 kubelet[2787]: I0213 15:26:45.243139 2787 topology_manager.go:215] "Topology Admit Handler" podUID="359d587f-955a-4722-9d5a-4819c8d98214" podNamespace="kube-system" podName="kube-proxy-96qjh" Feb 13 15:26:45.255582 systemd[1]: Created slice kubepods-besteffort-pod359d587f_955a_4722_9d5a_4819c8d98214.slice - libcontainer container kubepods-besteffort-pod359d587f_955a_4722_9d5a_4819c8d98214.slice. 
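
The "Created slice" entry above pairs with the kube-proxy pod admitted just before it: under the systemd cgroup driver reported in the kubelet NodeConfig, the slice name is built from the pod's QoS class and UID, with dashes replaced by underscores. The following is a minimal, illustrative Go sketch of that naming pattern (the sliceName helper is made up for this example, not kubelet code):

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming pattern visible in the journal:
// kubepods-<qos>-pod<UID with dashes replaced by underscores>.slice
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of kube-proxy-96qjh as reported by the Topology Admit Handler entry above.
	fmt.Println(sliceName("besteffort", "359d587f-955a-4722-9d5a-4819c8d98214"))
	// Prints: kubepods-besteffort-pod359d587f_955a_4722_9d5a_4819c8d98214.slice
}
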
Feb 13 15:26:45.326091 kubelet[2787]: I0213 15:26:45.325987 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/359d587f-955a-4722-9d5a-4819c8d98214-lib-modules\") pod \"kube-proxy-96qjh\" (UID: \"359d587f-955a-4722-9d5a-4819c8d98214\") " pod="kube-system/kube-proxy-96qjh" Feb 13 15:26:45.326091 kubelet[2787]: I0213 15:26:45.326055 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx9wl\" (UniqueName: \"kubernetes.io/projected/359d587f-955a-4722-9d5a-4819c8d98214-kube-api-access-tx9wl\") pod \"kube-proxy-96qjh\" (UID: \"359d587f-955a-4722-9d5a-4819c8d98214\") " pod="kube-system/kube-proxy-96qjh" Feb 13 15:26:45.326091 kubelet[2787]: I0213 15:26:45.326103 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/359d587f-955a-4722-9d5a-4819c8d98214-xtables-lock\") pod \"kube-proxy-96qjh\" (UID: \"359d587f-955a-4722-9d5a-4819c8d98214\") " pod="kube-system/kube-proxy-96qjh" Feb 13 15:26:45.326526 kubelet[2787]: I0213 15:26:45.326128 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/359d587f-955a-4722-9d5a-4819c8d98214-kube-proxy\") pod \"kube-proxy-96qjh\" (UID: \"359d587f-955a-4722-9d5a-4819c8d98214\") " pod="kube-system/kube-proxy-96qjh" Feb 13 15:26:45.566482 containerd[1510]: time="2025-02-13T15:26:45.566004402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96qjh,Uid:359d587f-955a-4722-9d5a-4819c8d98214,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:45.596509 containerd[1510]: time="2025-02-13T15:26:45.596228664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:45.596509 containerd[1510]: time="2025-02-13T15:26:45.596305266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:45.596509 containerd[1510]: time="2025-02-13T15:26:45.596348707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:45.596509 containerd[1510]: time="2025-02-13T15:26:45.596446989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:45.627614 systemd[1]: Started cri-containerd-5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d.scope - libcontainer container 5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d. 
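
The "Started cri-containerd-….scope" unit above embeds the 64-character sandbox ID that containerd returned for the kube-proxy pod, so the ID can be recovered from the journal text alone. A small, self-contained Go sketch with an illustrative regular expression (not code from any of the components logging here):

package main

import (
	"fmt"
	"regexp"
)

// The unit names in this journal follow the pattern cri-containerd-<64 hex chars>.scope,
// so the sandbox or container ID can be pulled straight out of the unit name.
var scopeID = regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope`)

func main() {
	line := "systemd[1]: Started cri-containerd-5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d.scope - libcontainer container ..."
	if m := scopeID.FindStringSubmatch(line); m != nil {
		fmt.Println("sandbox id:", m[1])
	}
}
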
Feb 13 15:26:45.659953 containerd[1510]: time="2025-02-13T15:26:45.659663134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96qjh,Uid:359d587f-955a-4722-9d5a-4819c8d98214,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d\"" Feb 13 15:26:45.666022 containerd[1510]: time="2025-02-13T15:26:45.665962793Z" level=info msg="CreateContainer within sandbox \"5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:26:45.697707 containerd[1510]: time="2025-02-13T15:26:45.697575645Z" level=info msg="CreateContainer within sandbox \"5a8271c7d0a2832080ce82518450d9f898bad99b54298867849eadad4d9d7c7d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a4bbe02119477608655c5930e166e867c69105c12387f1f0c4b1d02d77c01dd1\"" Feb 13 15:26:45.698652 containerd[1510]: time="2025-02-13T15:26:45.698367943Z" level=info msg="StartContainer for \"a4bbe02119477608655c5930e166e867c69105c12387f1f0c4b1d02d77c01dd1\"" Feb 13 15:26:45.705922 kubelet[2787]: I0213 15:26:45.705335 2787 topology_manager.go:215] "Topology Admit Handler" podUID="d99c2f73-3064-46b8-a080-a71266b43d2d" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-ckgfq" Feb 13 15:26:45.714533 systemd[1]: Created slice kubepods-besteffort-podd99c2f73_3064_46b8_a080_a71266b43d2d.slice - libcontainer container kubepods-besteffort-podd99c2f73_3064_46b8_a080_a71266b43d2d.slice. Feb 13 15:26:45.729269 kubelet[2787]: I0213 15:26:45.729140 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d99c2f73-3064-46b8-a080-a71266b43d2d-var-lib-calico\") pod \"tigera-operator-7bc55997bb-ckgfq\" (UID: \"d99c2f73-3064-46b8-a080-a71266b43d2d\") " pod="tigera-operator/tigera-operator-7bc55997bb-ckgfq" Feb 13 15:26:45.729269 kubelet[2787]: I0213 15:26:45.729193 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bj7j\" (UniqueName: \"kubernetes.io/projected/d99c2f73-3064-46b8-a080-a71266b43d2d-kube-api-access-5bj7j\") pod \"tigera-operator-7bc55997bb-ckgfq\" (UID: \"d99c2f73-3064-46b8-a080-a71266b43d2d\") " pod="tigera-operator/tigera-operator-7bc55997bb-ckgfq" Feb 13 15:26:45.747222 systemd[1]: Started cri-containerd-a4bbe02119477608655c5930e166e867c69105c12387f1f0c4b1d02d77c01dd1.scope - libcontainer container a4bbe02119477608655c5930e166e867c69105c12387f1f0c4b1d02d77c01dd1. Feb 13 15:26:45.784522 containerd[1510]: time="2025-02-13T15:26:45.784002339Z" level=info msg="StartContainer for \"a4bbe02119477608655c5930e166e867c69105c12387f1f0c4b1d02d77c01dd1\" returns successfully" Feb 13 15:26:46.022294 containerd[1510]: time="2025-02-13T15:26:46.022252988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-ckgfq,Uid:d99c2f73-3064-46b8-a080-a71266b43d2d,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:26:46.051491 containerd[1510]: time="2025-02-13T15:26:46.050586273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:46.051491 containerd[1510]: time="2025-02-13T15:26:46.051432571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:46.051491 containerd[1510]: time="2025-02-13T15:26:46.051461411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:46.052748 containerd[1510]: time="2025-02-13T15:26:46.051565614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:46.076131 systemd[1]: Started cri-containerd-81ccd3a44ae3d264b22b606159e7de82d0fd5138411073d371595ccbd8e0245d.scope - libcontainer container 81ccd3a44ae3d264b22b606159e7de82d0fd5138411073d371595ccbd8e0245d. Feb 13 15:26:46.122297 containerd[1510]: time="2025-02-13T15:26:46.122253122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-ckgfq,Uid:d99c2f73-3064-46b8-a080-a71266b43d2d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"81ccd3a44ae3d264b22b606159e7de82d0fd5138411073d371595ccbd8e0245d\"" Feb 13 15:26:46.127662 containerd[1510]: time="2025-02-13T15:26:46.126900461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:26:46.458865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount357171370.mount: Deactivated successfully. Feb 13 15:26:48.554443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029435596.mount: Deactivated successfully. Feb 13 15:26:49.030110 containerd[1510]: time="2025-02-13T15:26:49.030050795Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:49.031938 containerd[1510]: time="2025-02-13T15:26:49.031699307Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:26:49.033708 containerd[1510]: time="2025-02-13T15:26:49.033640146Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:49.036739 containerd[1510]: time="2025-02-13T15:26:49.036416320Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:49.037976 containerd[1510]: time="2025-02-13T15:26:49.037592903Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.909927946s" Feb 13 15:26:49.037976 containerd[1510]: time="2025-02-13T15:26:49.037626384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:26:49.040739 containerd[1510]: time="2025-02-13T15:26:49.040578202Z" level=info msg="CreateContainer within sandbox \"81ccd3a44ae3d264b22b606159e7de82d0fd5138411073d371595ccbd8e0245d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:26:49.056378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067188810.mount: Deactivated successfully. 
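
The pull of quay.io/tigera/operator:v1.36.2 above reports both a byte counter (bytes read=19124160) and an elapsed time (2.909927946s), which allows a rough effective transfer rate to be estimated. The back-of-envelope Go calculation below assumes the entire read fell inside the reported pull window:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken from the containerd entries above for quay.io/tigera/operator:v1.36.2.
	const bytesRead = 19124160.0
	elapsed, _ := time.ParseDuration("2.909927946s")

	mibPerSec := bytesRead / elapsed.Seconds() / (1024 * 1024)
	fmt.Printf("effective pull rate: %.2f MiB/s\n", mibPerSec) // roughly 6.3 MiB/s
}
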
Feb 13 15:26:49.058852 containerd[1510]: time="2025-02-13T15:26:49.058797401Z" level=info msg="CreateContainer within sandbox \"81ccd3a44ae3d264b22b606159e7de82d0fd5138411073d371595ccbd8e0245d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5263be874fc611e3e819b29284a2de0a770375d7e61008cc26a8ae94cd975bf\"" Feb 13 15:26:49.060884 containerd[1510]: time="2025-02-13T15:26:49.059378813Z" level=info msg="StartContainer for \"b5263be874fc611e3e819b29284a2de0a770375d7e61008cc26a8ae94cd975bf\"" Feb 13 15:26:49.089134 systemd[1]: Started cri-containerd-b5263be874fc611e3e819b29284a2de0a770375d7e61008cc26a8ae94cd975bf.scope - libcontainer container b5263be874fc611e3e819b29284a2de0a770375d7e61008cc26a8ae94cd975bf. Feb 13 15:26:49.120102 containerd[1510]: time="2025-02-13T15:26:49.120047527Z" level=info msg="StartContainer for \"b5263be874fc611e3e819b29284a2de0a770375d7e61008cc26a8ae94cd975bf\" returns successfully" Feb 13 15:26:49.552412 kubelet[2787]: I0213 15:26:49.552322 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96qjh" podStartSLOduration=4.552290201 podStartE2EDuration="4.552290201s" podCreationTimestamp="2025-02-13 15:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:46.543181302 +0000 UTC m=+16.213312783" watchObservedRunningTime="2025-02-13 15:26:49.552290201 +0000 UTC m=+19.222421682" Feb 13 15:26:50.448818 kubelet[2787]: I0213 15:26:50.448555 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-ckgfq" podStartSLOduration=2.53585158 podStartE2EDuration="5.448529903s" podCreationTimestamp="2025-02-13 15:26:45 +0000 UTC" firstStartedPulling="2025-02-13 15:26:46.126101084 +0000 UTC m=+15.796232525" lastFinishedPulling="2025-02-13 15:26:49.038779407 +0000 UTC m=+18.708910848" observedRunningTime="2025-02-13 15:26:49.553576746 +0000 UTC m=+19.223708187" watchObservedRunningTime="2025-02-13 15:26:50.448529903 +0000 UTC m=+20.118661384" Feb 13 15:26:52.788460 kubelet[2787]: I0213 15:26:52.788234 2787 topology_manager.go:215] "Topology Admit Handler" podUID="351ce921-f934-4762-b93f-05f8b9af8d84" podNamespace="calico-system" podName="calico-typha-76b4c6d959-5krt9" Feb 13 15:26:52.797928 systemd[1]: Created slice kubepods-besteffort-pod351ce921_f934_4762_b93f_05f8b9af8d84.slice - libcontainer container kubepods-besteffort-pod351ce921_f934_4762_b93f_05f8b9af8d84.slice. 
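
For the tigera-operator pod, the startup tracker above reports podStartSLOduration=2.53585158 against podStartE2EDuration=5.448529903s, and the gap matches the firstStartedPulling/lastFinishedPulling window exactly, consistent with the SLO figure excluding image-pull time. A short Go check of that arithmetic (the layout string and variable names are illustrative; the timestamps are copied from the log entry):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps and durations from the tigera-operator pod_startup_latency_tracker entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	firstPull, _ := time.Parse(layout, "2025-02-13 15:26:46.126101084 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2025-02-13 15:26:49.038779407 +0000 UTC")
	e2e := 5.448529903 // podStartE2EDuration in seconds

	// Subtracting the image-pull window from the end-to-end duration
	// reproduces the reported podStartSLOduration.
	slo := e2e - lastPull.Sub(firstPull).Seconds()
	fmt.Printf("derived podStartSLOduration: %.9f s\n", slo) // ≈ 2.535851580
}
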
Feb 13 15:26:52.874992 kubelet[2787]: I0213 15:26:52.874931 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/351ce921-f934-4762-b93f-05f8b9af8d84-typha-certs\") pod \"calico-typha-76b4c6d959-5krt9\" (UID: \"351ce921-f934-4762-b93f-05f8b9af8d84\") " pod="calico-system/calico-typha-76b4c6d959-5krt9" Feb 13 15:26:52.875206 kubelet[2787]: I0213 15:26:52.875189 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/351ce921-f934-4762-b93f-05f8b9af8d84-tigera-ca-bundle\") pod \"calico-typha-76b4c6d959-5krt9\" (UID: \"351ce921-f934-4762-b93f-05f8b9af8d84\") " pod="calico-system/calico-typha-76b4c6d959-5krt9" Feb 13 15:26:52.875346 kubelet[2787]: I0213 15:26:52.875262 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfgg6\" (UniqueName: \"kubernetes.io/projected/351ce921-f934-4762-b93f-05f8b9af8d84-kube-api-access-qfgg6\") pod \"calico-typha-76b4c6d959-5krt9\" (UID: \"351ce921-f934-4762-b93f-05f8b9af8d84\") " pod="calico-system/calico-typha-76b4c6d959-5krt9" Feb 13 15:26:52.988267 kubelet[2787]: I0213 15:26:52.987259 2787 topology_manager.go:215] "Topology Admit Handler" podUID="eb1f3f73-da90-4d5c-b3c6-3eb4ba638822" podNamespace="calico-system" podName="calico-node-m87gv" Feb 13 15:26:53.013759 systemd[1]: Created slice kubepods-besteffort-podeb1f3f73_da90_4d5c_b3c6_3eb4ba638822.slice - libcontainer container kubepods-besteffort-podeb1f3f73_da90_4d5c_b3c6_3eb4ba638822.slice. Feb 13 15:26:53.076746 kubelet[2787]: I0213 15:26:53.075940 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-tigera-ca-bundle\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.076746 kubelet[2787]: I0213 15:26:53.076039 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-cni-log-dir\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.076746 kubelet[2787]: I0213 15:26:53.076060 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-xtables-lock\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.076746 kubelet[2787]: I0213 15:26:53.076077 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-node-certs\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.076746 kubelet[2787]: I0213 15:26:53.076094 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-var-run-calico\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " 
pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077466 kubelet[2787]: I0213 15:26:53.076109 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-cni-bin-dir\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077466 kubelet[2787]: I0213 15:26:53.076124 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-lib-modules\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077466 kubelet[2787]: I0213 15:26:53.076139 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-policysync\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077466 kubelet[2787]: I0213 15:26:53.076154 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc4qr\" (UniqueName: \"kubernetes.io/projected/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-kube-api-access-xc4qr\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077466 kubelet[2787]: I0213 15:26:53.076172 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-var-lib-calico\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077569 kubelet[2787]: I0213 15:26:53.076187 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-cni-net-dir\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.077569 kubelet[2787]: I0213 15:26:53.076201 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eb1f3f73-da90-4d5c-b3c6-3eb4ba638822-flexvol-driver-host\") pod \"calico-node-m87gv\" (UID: \"eb1f3f73-da90-4d5c-b3c6-3eb4ba638822\") " pod="calico-system/calico-node-m87gv" Feb 13 15:26:53.100890 kubelet[2787]: I0213 15:26:53.100141 2787 topology_manager.go:215] "Topology Admit Handler" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" podNamespace="calico-system" podName="csi-node-driver-9kzwx" Feb 13 15:26:53.100890 kubelet[2787]: E0213 15:26:53.100416 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:26:53.103992 containerd[1510]: time="2025-02-13T15:26:53.103381409Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-76b4c6d959-5krt9,Uid:351ce921-f934-4762-b93f-05f8b9af8d84,Namespace:calico-system,Attempt:0,}" Feb 13 15:26:53.140277 containerd[1510]: time="2025-02-13T15:26:53.140111901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:53.140277 containerd[1510]: time="2025-02-13T15:26:53.140171982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:53.142058 containerd[1510]: time="2025-02-13T15:26:53.140187382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:53.143188 containerd[1510]: time="2025-02-13T15:26:53.142196258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:53.178714 kubelet[2787]: I0213 15:26:53.178678 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b-socket-dir\") pod \"csi-node-driver-9kzwx\" (UID: \"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b\") " pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:26:53.179139 systemd[1]: Started cri-containerd-da1f976cad41d1f7c5eec0aaf4d638cdd85d81c92a7655e74f1a2b12e91ab68f.scope - libcontainer container da1f976cad41d1f7c5eec0aaf4d638cdd85d81c92a7655e74f1a2b12e91ab68f. Feb 13 15:26:53.185960 kubelet[2787]: I0213 15:26:53.182709 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b-registration-dir\") pod \"csi-node-driver-9kzwx\" (UID: \"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b\") " pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:26:53.185960 kubelet[2787]: I0213 15:26:53.182807 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b-varrun\") pod \"csi-node-driver-9kzwx\" (UID: \"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b\") " pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:26:53.185960 kubelet[2787]: I0213 15:26:53.182827 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b-kubelet-dir\") pod \"csi-node-driver-9kzwx\" (UID: \"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b\") " pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:26:53.185960 kubelet[2787]: I0213 15:26:53.182879 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5bt\" (UniqueName: \"kubernetes.io/projected/0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b-kube-api-access-6s5bt\") pod \"csi-node-driver-9kzwx\" (UID: \"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b\") " pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:26:53.193199 kubelet[2787]: E0213 15:26:53.193170 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.193471 kubelet[2787]: W0213 15:26:53.193453 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Feb 13 15:26:53.193626 kubelet[2787]: E0213 15:26:53.193581 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.233210 kubelet[2787]: E0213 15:26:53.233164 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.233395 kubelet[2787]: W0213 15:26:53.233380 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.233473 kubelet[2787]: E0213 15:26:53.233461 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.284164 kubelet[2787]: E0213 15:26:53.284134 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.284332 kubelet[2787]: W0213 15:26:53.284317 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.284411 kubelet[2787]: E0213 15:26:53.284393 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.284763 kubelet[2787]: E0213 15:26:53.284727 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.284941 kubelet[2787]: W0213 15:26:53.284856 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.285063 kubelet[2787]: E0213 15:26:53.285048 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.285383 kubelet[2787]: E0213 15:26:53.285369 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.285469 kubelet[2787]: W0213 15:26:53.285456 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.285594 kubelet[2787]: E0213 15:26:53.285540 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:53.285849 kubelet[2787]: E0213 15:26:53.285802 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.286344 kubelet[2787]: W0213 15:26:53.286157 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.286344 kubelet[2787]: E0213 15:26:53.286286 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.286613 kubelet[2787]: E0213 15:26:53.286600 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.286709 kubelet[2787]: W0213 15:26:53.286669 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.286827 kubelet[2787]: E0213 15:26:53.286815 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.287146 kubelet[2787]: E0213 15:26:53.287097 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.287146 kubelet[2787]: W0213 15:26:53.287108 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.287342 kubelet[2787]: E0213 15:26:53.287248 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.287568 kubelet[2787]: E0213 15:26:53.287542 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.287568 kubelet[2787]: W0213 15:26:53.287554 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.287747 kubelet[2787]: E0213 15:26:53.287734 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.288063 kubelet[2787]: E0213 15:26:53.288038 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.288063 kubelet[2787]: W0213 15:26:53.288050 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.288244 kubelet[2787]: E0213 15:26:53.288127 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:53.288472 kubelet[2787]: E0213 15:26:53.288443 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.288472 kubelet[2787]: W0213 15:26:53.288454 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.288668 kubelet[2787]: E0213 15:26:53.288577 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.289057 kubelet[2787]: E0213 15:26:53.289001 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.289278 kubelet[2787]: W0213 15:26:53.289013 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.289278 kubelet[2787]: E0213 15:26:53.289184 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.289578 kubelet[2787]: E0213 15:26:53.289535 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.289578 kubelet[2787]: W0213 15:26:53.289547 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.289742 kubelet[2787]: E0213 15:26:53.289672 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.290040 kubelet[2787]: E0213 15:26:53.289978 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.290040 kubelet[2787]: W0213 15:26:53.289991 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.290248 kubelet[2787]: E0213 15:26:53.290147 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.290408 kubelet[2787]: E0213 15:26:53.290396 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.290542 kubelet[2787]: W0213 15:26:53.290449 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.290615 kubelet[2787]: E0213 15:26:53.290601 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:53.290831 kubelet[2787]: E0213 15:26:53.290778 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.290831 kubelet[2787]: W0213 15:26:53.290790 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.291331 kubelet[2787]: E0213 15:26:53.291124 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.291686 kubelet[2787]: E0213 15:26:53.291670 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.291795 kubelet[2787]: W0213 15:26:53.291774 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.291997 kubelet[2787]: E0213 15:26:53.291919 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.292234 kubelet[2787]: E0213 15:26:53.292220 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.292353 kubelet[2787]: W0213 15:26:53.292304 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.292569 kubelet[2787]: E0213 15:26:53.292525 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.293575 kubelet[2787]: E0213 15:26:53.293479 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.293575 kubelet[2787]: W0213 15:26:53.293494 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.294881 kubelet[2787]: E0213 15:26:53.294756 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.294881 kubelet[2787]: W0213 15:26:53.294770 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.295726 kubelet[2787]: E0213 15:26:53.295541 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.295726 kubelet[2787]: E0213 15:26:53.295564 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:53.295726 kubelet[2787]: E0213 15:26:53.295620 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.295726 kubelet[2787]: W0213 15:26:53.295628 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.296155 kubelet[2787]: E0213 15:26:53.295859 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.296374 kubelet[2787]: E0213 15:26:53.296359 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.296525 kubelet[2787]: W0213 15:26:53.296445 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.296680 kubelet[2787]: E0213 15:26:53.296668 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.296810 kubelet[2787]: W0213 15:26:53.296732 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.296810 kubelet[2787]: E0213 15:26:53.296748 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.297784 containerd[1510]: time="2025-02-13T15:26:53.297339010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76b4c6d959-5krt9,Uid:351ce921-f934-4762-b93f-05f8b9af8d84,Namespace:calico-system,Attempt:0,} returns sandbox id \"da1f976cad41d1f7c5eec0aaf4d638cdd85d81c92a7655e74f1a2b12e91ab68f\"" Feb 13 15:26:53.297878 kubelet[2787]: E0213 15:26:53.297484 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.297878 kubelet[2787]: W0213 15:26:53.297496 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.297878 kubelet[2787]: E0213 15:26:53.297510 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.297878 kubelet[2787]: E0213 15:26:53.297739 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:53.298331 kubelet[2787]: E0213 15:26:53.298318 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.298711 kubelet[2787]: W0213 15:26:53.298609 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.298812 kubelet[2787]: E0213 15:26:53.298784 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.299564 kubelet[2787]: E0213 15:26:53.299551 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.299992 kubelet[2787]: W0213 15:26:53.299679 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.299992 kubelet[2787]: E0213 15:26:53.299700 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.300447 kubelet[2787]: E0213 15:26:53.300423 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.300541 containerd[1510]: time="2025-02-13T15:26:53.300500506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:26:53.300689 kubelet[2787]: W0213 15:26:53.300664 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.300790 kubelet[2787]: E0213 15:26:53.300775 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.313347 kubelet[2787]: E0213 15:26:53.313308 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:53.313584 kubelet[2787]: W0213 15:26:53.313554 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:53.313773 kubelet[2787]: E0213 15:26:53.313706 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:53.318569 containerd[1510]: time="2025-02-13T15:26:53.318113139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m87gv,Uid:eb1f3f73-da90-4d5c-b3c6-3eb4ba638822,Namespace:calico-system,Attempt:0,}" Feb 13 15:26:53.340840 containerd[1510]: time="2025-02-13T15:26:53.340598497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:53.340840 containerd[1510]: time="2025-02-13T15:26:53.340686219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:53.340840 containerd[1510]: time="2025-02-13T15:26:53.340801781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:53.341166 containerd[1510]: time="2025-02-13T15:26:53.341062066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:53.363141 systemd[1]: Started cri-containerd-00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c.scope - libcontainer container 00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c. Feb 13 15:26:53.402086 containerd[1510]: time="2025-02-13T15:26:53.401735502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m87gv,Uid:eb1f3f73-da90-4d5c-b3c6-3eb4ba638822,Namespace:calico-system,Attempt:0,} returns sandbox id \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\"" Feb 13 15:26:54.432796 kubelet[2787]: E0213 15:26:54.431220 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:26:54.950675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57723432.mount: Deactivated successfully. Feb 13 15:26:55.968012 containerd[1510]: time="2025-02-13T15:26:55.967244344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.968485 containerd[1510]: time="2025-02-13T15:26:55.968349683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:26:55.969365 containerd[1510]: time="2025-02-13T15:26:55.969326499Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.972524 containerd[1510]: time="2025-02-13T15:26:55.972472232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.973767 containerd[1510]: time="2025-02-13T15:26:55.973624452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.673079545s" Feb 13 15:26:55.973767 containerd[1510]: time="2025-02-13T15:26:55.973669773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:26:55.976400 containerd[1510]: time="2025-02-13T15:26:55.975081396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:26:55.989242 containerd[1510]: time="2025-02-13T15:26:55.989196874Z" level=info msg="CreateContainer within sandbox \"da1f976cad41d1f7c5eec0aaf4d638cdd85d81c92a7655e74f1a2b12e91ab68f\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:26:56.012945 containerd[1510]: time="2025-02-13T15:26:56.012808187Z" level=info msg="CreateContainer within sandbox \"da1f976cad41d1f7c5eec0aaf4d638cdd85d81c92a7655e74f1a2b12e91ab68f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8be50434645281643b9ee40abbe7b9c13a3cb81b97fdeaf0315658cc11450122\"" Feb 13 15:26:56.015965 containerd[1510]: time="2025-02-13T15:26:56.015537472Z" level=info msg="StartContainer for \"8be50434645281643b9ee40abbe7b9c13a3cb81b97fdeaf0315658cc11450122\"" Feb 13 15:26:56.044111 systemd[1]: Started cri-containerd-8be50434645281643b9ee40abbe7b9c13a3cb81b97fdeaf0315658cc11450122.scope - libcontainer container 8be50434645281643b9ee40abbe7b9c13a3cb81b97fdeaf0315658cc11450122. Feb 13 15:26:56.080683 containerd[1510]: time="2025-02-13T15:26:56.080639221Z" level=info msg="StartContainer for \"8be50434645281643b9ee40abbe7b9c13a3cb81b97fdeaf0315658cc11450122\" returns successfully" Feb 13 15:26:56.431170 kubelet[2787]: E0213 15:26:56.430722 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:26:56.592111 kubelet[2787]: E0213 15:26:56.592018 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.592111 kubelet[2787]: W0213 15:26:56.592042 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.592111 kubelet[2787]: E0213 15:26:56.592062 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.592744 kubelet[2787]: E0213 15:26:56.592591 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.592744 kubelet[2787]: W0213 15:26:56.592606 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.592744 kubelet[2787]: E0213 15:26:56.592618 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.593129 kubelet[2787]: E0213 15:26:56.593020 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.593129 kubelet[2787]: W0213 15:26:56.593033 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.593129 kubelet[2787]: E0213 15:26:56.593045 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.593591 kubelet[2787]: E0213 15:26:56.593482 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.593591 kubelet[2787]: W0213 15:26:56.593495 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.593591 kubelet[2787]: E0213 15:26:56.593506 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.593988 kubelet[2787]: E0213 15:26:56.593809 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.593988 kubelet[2787]: W0213 15:26:56.593821 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.593988 kubelet[2787]: E0213 15:26:56.593831 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.594197 kubelet[2787]: E0213 15:26:56.594136 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.594197 kubelet[2787]: W0213 15:26:56.594149 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.594197 kubelet[2787]: E0213 15:26:56.594159 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.594595 kubelet[2787]: E0213 15:26:56.594437 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.594595 kubelet[2787]: W0213 15:26:56.594449 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.594595 kubelet[2787]: E0213 15:26:56.594458 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.595174 kubelet[2787]: E0213 15:26:56.594718 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.595174 kubelet[2787]: W0213 15:26:56.594729 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.595174 kubelet[2787]: E0213 15:26:56.594738 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.595610 kubelet[2787]: E0213 15:26:56.595594 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.595741 kubelet[2787]: W0213 15:26:56.595723 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.595819 kubelet[2787]: E0213 15:26:56.595804 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.596159 kubelet[2787]: E0213 15:26:56.596143 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.596292 kubelet[2787]: W0213 15:26:56.596231 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.596292 kubelet[2787]: E0213 15:26:56.596250 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.596901 kubelet[2787]: E0213 15:26:56.596740 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.596901 kubelet[2787]: W0213 15:26:56.596754 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.596901 kubelet[2787]: E0213 15:26:56.596781 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.597966 kubelet[2787]: E0213 15:26:56.597855 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.598189 kubelet[2787]: W0213 15:26:56.598057 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.598189 kubelet[2787]: E0213 15:26:56.598080 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.598474 kubelet[2787]: E0213 15:26:56.598402 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.598474 kubelet[2787]: W0213 15:26:56.598416 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.598474 kubelet[2787]: E0213 15:26:56.598427 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.598925 kubelet[2787]: E0213 15:26:56.598735 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.598925 kubelet[2787]: W0213 15:26:56.598747 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.598925 kubelet[2787]: E0213 15:26:56.598759 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.599182 kubelet[2787]: E0213 15:26:56.599169 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.599316 kubelet[2787]: W0213 15:26:56.599244 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.599316 kubelet[2787]: E0213 15:26:56.599261 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.613143 kubelet[2787]: E0213 15:26:56.613079 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.613429 kubelet[2787]: W0213 15:26:56.613304 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.613429 kubelet[2787]: E0213 15:26:56.613335 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.613808 kubelet[2787]: E0213 15:26:56.613720 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.613808 kubelet[2787]: W0213 15:26:56.613734 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.613808 kubelet[2787]: E0213 15:26:56.613748 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.614338 kubelet[2787]: E0213 15:26:56.614307 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.614784 kubelet[2787]: W0213 15:26:56.614513 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.614784 kubelet[2787]: E0213 15:26:56.614566 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.615506 kubelet[2787]: E0213 15:26:56.615376 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.615506 kubelet[2787]: W0213 15:26:56.615401 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.615506 kubelet[2787]: E0213 15:26:56.615440 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.616429 kubelet[2787]: E0213 15:26:56.616137 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.616429 kubelet[2787]: W0213 15:26:56.616163 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.616429 kubelet[2787]: E0213 15:26:56.616274 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.616869 kubelet[2787]: E0213 15:26:56.616817 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.616869 kubelet[2787]: W0213 15:26:56.616849 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.617029 kubelet[2787]: E0213 15:26:56.616974 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.617430 kubelet[2787]: E0213 15:26:56.617384 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.617430 kubelet[2787]: W0213 15:26:56.617417 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.617536 kubelet[2787]: E0213 15:26:56.617512 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.618054 kubelet[2787]: E0213 15:26:56.617891 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.618054 kubelet[2787]: W0213 15:26:56.618059 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.618187 kubelet[2787]: E0213 15:26:56.618163 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.618525 kubelet[2787]: E0213 15:26:56.618486 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.618525 kubelet[2787]: W0213 15:26:56.618511 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.618780 kubelet[2787]: E0213 15:26:56.618670 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.619061 kubelet[2787]: E0213 15:26:56.619014 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.619061 kubelet[2787]: W0213 15:26:56.619044 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.619183 kubelet[2787]: E0213 15:26:56.619089 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.619544 kubelet[2787]: E0213 15:26:56.619519 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.619609 kubelet[2787]: W0213 15:26:56.619545 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.619768 kubelet[2787]: E0213 15:26:56.619717 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.620143 kubelet[2787]: E0213 15:26:56.620115 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.620225 kubelet[2787]: W0213 15:26:56.620145 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.620225 kubelet[2787]: E0213 15:26:56.620187 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.620870 kubelet[2787]: E0213 15:26:56.620842 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.621005 kubelet[2787]: W0213 15:26:56.620872 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.621082 kubelet[2787]: E0213 15:26:56.621051 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:56.621572 kubelet[2787]: E0213 15:26:56.621546 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.621629 kubelet[2787]: W0213 15:26:56.621605 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.622019 kubelet[2787]: E0213 15:26:56.621716 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.623407 kubelet[2787]: E0213 15:26:56.623039 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.623407 kubelet[2787]: W0213 15:26:56.623057 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.623407 kubelet[2787]: E0213 15:26:56.623151 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.624064 kubelet[2787]: E0213 15:26:56.624030 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.624064 kubelet[2787]: W0213 15:26:56.624055 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.624160 kubelet[2787]: E0213 15:26:56.624071 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.624849 kubelet[2787]: E0213 15:26:56.624821 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.624849 kubelet[2787]: W0213 15:26:56.624839 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.624849 kubelet[2787]: E0213 15:26:56.624854 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:26:56.625721 kubelet[2787]: E0213 15:26:56.625675 2787 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:26:56.625721 kubelet[2787]: W0213 15:26:56.625693 2787 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:26:56.625721 kubelet[2787]: E0213 15:26:56.625714 2787 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:26:57.455970 containerd[1510]: time="2025-02-13T15:26:57.455739225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:57.457146 containerd[1510]: time="2025-02-13T15:26:57.457094647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:26:57.458275 containerd[1510]: time="2025-02-13T15:26:57.458219345Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:57.461006 containerd[1510]: time="2025-02-13T15:26:57.460974869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:57.462951 containerd[1510]: time="2025-02-13T15:26:57.461748601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.486630124s" Feb 13 15:26:57.462951 containerd[1510]: time="2025-02-13T15:26:57.461781282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:26:57.466143 containerd[1510]: time="2025-02-13T15:26:57.466092391Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:26:57.490447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363386170.mount: Deactivated successfully. Feb 13 15:26:57.492156 containerd[1510]: time="2025-02-13T15:26:57.492106607Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b\"" Feb 13 15:26:57.494784 containerd[1510]: time="2025-02-13T15:26:57.494743930Z" level=info msg="StartContainer for \"b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b\"" Feb 13 15:26:57.532198 systemd[1]: Started cri-containerd-b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b.scope - libcontainer container b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b. Feb 13 15:26:57.567650 containerd[1510]: time="2025-02-13T15:26:57.567551256Z" level=info msg="StartContainer for \"b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b\" returns successfully" Feb 13 15:26:57.570867 kubelet[2787]: I0213 15:26:57.570824 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:26:57.588981 systemd[1]: cri-containerd-b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b.scope: Deactivated successfully. 
Feb 13 15:26:57.617961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b-rootfs.mount: Deactivated successfully. Feb 13 15:26:57.720723 containerd[1510]: time="2025-02-13T15:26:57.720322543Z" level=info msg="shim disconnected" id=b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b namespace=k8s.io Feb 13 15:26:57.720723 containerd[1510]: time="2025-02-13T15:26:57.720442585Z" level=warning msg="cleaning up after shim disconnected" id=b6115a2cadbbd5b7b52b00d49fca1ebaf82614543449c4211015a178c57dc51b namespace=k8s.io Feb 13 15:26:57.720723 containerd[1510]: time="2025-02-13T15:26:57.720456985Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:57.738063 containerd[1510]: time="2025-02-13T15:26:57.737788943Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:26:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:26:58.431997 kubelet[2787]: E0213 15:26:58.430648 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:26:58.577310 containerd[1510]: time="2025-02-13T15:26:58.577261800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:26:58.610959 kubelet[2787]: I0213 15:26:58.609749 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76b4c6d959-5krt9" podStartSLOduration=3.935103095 podStartE2EDuration="6.609730507s" podCreationTimestamp="2025-02-13 15:26:52 +0000 UTC" firstStartedPulling="2025-02-13 15:26:53.3001797 +0000 UTC m=+22.970311101" lastFinishedPulling="2025-02-13 15:26:55.974807072 +0000 UTC m=+25.644938513" observedRunningTime="2025-02-13 15:26:56.581340847 +0000 UTC m=+26.251472288" watchObservedRunningTime="2025-02-13 15:26:58.609730507 +0000 UTC m=+28.279861988" Feb 13 15:27:00.431756 kubelet[2787]: E0213 15:27:00.430535 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:01.382965 containerd[1510]: time="2025-02-13T15:27:01.382875694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.385324 containerd[1510]: time="2025-02-13T15:27:01.385258449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:27:01.386367 containerd[1510]: time="2025-02-13T15:27:01.386125861Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.391313 containerd[1510]: time="2025-02-13T15:27:01.391253776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:01.395075 
containerd[1510]: time="2025-02-13T15:27:01.393848813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.816547812s" Feb 13 15:27:01.395075 containerd[1510]: time="2025-02-13T15:27:01.393929335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:27:01.403967 containerd[1510]: time="2025-02-13T15:27:01.403570594Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:27:01.428145 containerd[1510]: time="2025-02-13T15:27:01.428077910Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c\"" Feb 13 15:27:01.430091 containerd[1510]: time="2025-02-13T15:27:01.430046858Z" level=info msg="StartContainer for \"87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c\"" Feb 13 15:27:01.481740 systemd[1]: Started cri-containerd-87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c.scope - libcontainer container 87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c. Feb 13 15:27:01.544429 containerd[1510]: time="2025-02-13T15:27:01.544017231Z" level=info msg="StartContainer for \"87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c\" returns successfully" Feb 13 15:27:02.174376 containerd[1510]: time="2025-02-13T15:27:02.174282470Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:27:02.177455 systemd[1]: cri-containerd-87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c.scope: Deactivated successfully. Feb 13 15:27:02.178104 systemd[1]: cri-containerd-87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c.scope: Consumed 565ms CPU time, 173.2M memory peak, 147.4M written to disk. Feb 13 15:27:02.183464 kubelet[2787]: I0213 15:27:02.183433 2787 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:27:02.209764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c-rootfs.mount: Deactivated successfully. Feb 13 15:27:02.227046 kubelet[2787]: I0213 15:27:02.226401 2787 topology_manager.go:215] "Topology Admit Handler" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-77zlz" Feb 13 15:27:02.236176 systemd[1]: Created slice kubepods-burstable-pod6bd1757e_903b_41e7_a1bd_8c95ee35dbf5.slice - libcontainer container kubepods-burstable-pod6bd1757e_903b_41e7_a1bd_8c95ee35dbf5.slice. 
Feb 13 15:27:02.251197 kubelet[2787]: I0213 15:27:02.245438 2787 topology_manager.go:215] "Topology Admit Handler" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" podNamespace="calico-apiserver" podName="calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:02.251197 kubelet[2787]: I0213 15:27:02.245667 2787 topology_manager.go:215] "Topology Admit Handler" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" podNamespace="calico-apiserver" podName="calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:02.251197 kubelet[2787]: I0213 15:27:02.247481 2787 topology_manager.go:215] "Topology Admit Handler" podUID="b22d464a-68ec-4133-9687-c90c18294db8" podNamespace="calico-system" podName="calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:02.251197 kubelet[2787]: I0213 15:27:02.247649 2787 topology_manager.go:215] "Topology Admit Handler" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:02.260186 kubelet[2787]: I0213 15:27:02.260007 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09d81451-756a-493d-9c79-aa5ef1100e0d-calico-apiserver-certs\") pod \"calico-apiserver-7688c5dd9f-n9z22\" (UID: \"09d81451-756a-493d-9c79-aa5ef1100e0d\") " pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:02.260186 kubelet[2787]: I0213 15:27:02.260042 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44q2p\" (UniqueName: \"kubernetes.io/projected/09d81451-756a-493d-9c79-aa5ef1100e0d-kube-api-access-44q2p\") pod \"calico-apiserver-7688c5dd9f-n9z22\" (UID: \"09d81451-756a-493d-9c79-aa5ef1100e0d\") " pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:02.260186 kubelet[2787]: I0213 15:27:02.260074 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6bd1757e-903b-41e7-a1bd-8c95ee35dbf5-config-volume\") pod \"coredns-7db6d8ff4d-77zlz\" (UID: \"6bd1757e-903b-41e7-a1bd-8c95ee35dbf5\") " pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:02.260186 kubelet[2787]: I0213 15:27:02.260097 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtz8h\" (UniqueName: \"kubernetes.io/projected/6bd1757e-903b-41e7-a1bd-8c95ee35dbf5-kube-api-access-rtz8h\") pod \"coredns-7db6d8ff4d-77zlz\" (UID: \"6bd1757e-903b-41e7-a1bd-8c95ee35dbf5\") " pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:02.263378 systemd[1]: Created slice kubepods-besteffort-podea674312_9393_4b86_8bbb_6e3a7a4e2c23.slice - libcontainer container kubepods-besteffort-podea674312_9393_4b86_8bbb_6e3a7a4e2c23.slice. 
Feb 13 15:27:02.269088 kubelet[2787]: W0213 15:27:02.268471 2787 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4230-0-1-9-12db063e25" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4230-0-1-9-12db063e25' and this object Feb 13 15:27:02.272047 kubelet[2787]: E0213 15:27:02.271252 2787 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4230-0-1-9-12db063e25" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4230-0-1-9-12db063e25' and this object Feb 13 15:27:02.272047 kubelet[2787]: W0213 15:27:02.269701 2787 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-1-9-12db063e25" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4230-0-1-9-12db063e25' and this object Feb 13 15:27:02.272047 kubelet[2787]: E0213 15:27:02.271299 2787 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4230-0-1-9-12db063e25" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4230-0-1-9-12db063e25' and this object Feb 13 15:27:02.277846 systemd[1]: Created slice kubepods-besteffort-pod09d81451_756a_493d_9c79_aa5ef1100e0d.slice - libcontainer container kubepods-besteffort-pod09d81451_756a_493d_9c79_aa5ef1100e0d.slice. Feb 13 15:27:02.286164 systemd[1]: Created slice kubepods-burstable-pod400c597f_5d24_4504_a9fc_e7fa6fcc44df.slice - libcontainer container kubepods-burstable-pod400c597f_5d24_4504_a9fc_e7fa6fcc44df.slice. Feb 13 15:27:02.294462 systemd[1]: Created slice kubepods-besteffort-podb22d464a_68ec_4133_9687_c90c18294db8.slice - libcontainer container kubepods-besteffort-podb22d464a_68ec_4133_9687_c90c18294db8.slice. 
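The two reflector failures just above are the Node authorizer at work: the kubelet authenticates as system:node:ci-4230-0-1-9-12db063e25 and is only allowed to read secrets and configmaps referenced by pods already bound to that node, hence "no relationship found between node ... and this object" until the calico-apiserver pod's volumes are being set up. An equivalent read issued through client-go would look roughly like the sketch below; the kubeconfig path is hypothetical, and the namespace and secret name are taken from the warning above:

// secret-get-sketch: issues the same kind of read the kubelet's reflector is
// attempting above. With node credentials it is denied by the Node authorizer
// until a pod bound to the node references the secret. Illustrative only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the kubelet itself uses its own credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and secret name as they appear in the reflector warning above.
	s, err := cs.CoreV1().Secrets("calico-apiserver").Get(
		context.Background(), "calico-apiserver-certs", metav1.GetOptions{})
	if err != nil {
		// With node credentials this is where a Forbidden error surfaces.
		fmt.Println("get secret failed:", err)
		return
	}
	fmt.Printf("secret %s has %d keys\n", s.Name, len(s.Data))
}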
Feb 13 15:27:02.330697 containerd[1510]: time="2025-02-13T15:27:02.330577281Z" level=info msg="shim disconnected" id=87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c namespace=k8s.io Feb 13 15:27:02.330697 containerd[1510]: time="2025-02-13T15:27:02.330647162Z" level=warning msg="cleaning up after shim disconnected" id=87cda7f41c81f1934822ed5ea26ef3cedafdfd97f964ee2723ce0fab2a89803c namespace=k8s.io Feb 13 15:27:02.330697 containerd[1510]: time="2025-02-13T15:27:02.330656843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:02.347950 containerd[1510]: time="2025-02-13T15:27:02.347160556Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:27:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:27:02.361262 kubelet[2787]: I0213 15:27:02.361206 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2t8m\" (UniqueName: \"kubernetes.io/projected/400c597f-5d24-4504-a9fc-e7fa6fcc44df-kube-api-access-b2t8m\") pod \"coredns-7db6d8ff4d-8cbdv\" (UID: \"400c597f-5d24-4504-a9fc-e7fa6fcc44df\") " pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:02.361456 kubelet[2787]: I0213 15:27:02.361324 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b22d464a-68ec-4133-9687-c90c18294db8-tigera-ca-bundle\") pod \"calico-kube-controllers-788df79c7b-m4kpj\" (UID: \"b22d464a-68ec-4133-9687-c90c18294db8\") " pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:02.361456 kubelet[2787]: I0213 15:27:02.361353 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9fk\" (UniqueName: \"kubernetes.io/projected/b22d464a-68ec-4133-9687-c90c18294db8-kube-api-access-2g9fk\") pod \"calico-kube-controllers-788df79c7b-m4kpj\" (UID: \"b22d464a-68ec-4133-9687-c90c18294db8\") " pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:02.361456 kubelet[2787]: I0213 15:27:02.361396 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea674312-9393-4b86-8bbb-6e3a7a4e2c23-calico-apiserver-certs\") pod \"calico-apiserver-7688c5dd9f-wlqt5\" (UID: \"ea674312-9393-4b86-8bbb-6e3a7a4e2c23\") " pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:02.361456 kubelet[2787]: I0213 15:27:02.361421 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zschw\" (UniqueName: \"kubernetes.io/projected/ea674312-9393-4b86-8bbb-6e3a7a4e2c23-kube-api-access-zschw\") pod \"calico-apiserver-7688c5dd9f-wlqt5\" (UID: \"ea674312-9393-4b86-8bbb-6e3a7a4e2c23\") " pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:02.361456 kubelet[2787]: I0213 15:27:02.361442 2787 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/400c597f-5d24-4504-a9fc-e7fa6fcc44df-config-volume\") pod \"coredns-7db6d8ff4d-8cbdv\" (UID: \"400c597f-5d24-4504-a9fc-e7fa6fcc44df\") " pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:02.438730 systemd[1]: Created slice kubepods-besteffort-pod0b9e30d9_f00b_49ef_83d6_ebd7f08b8f7b.slice - 
libcontainer container kubepods-besteffort-pod0b9e30d9_f00b_49ef_83d6_ebd7f08b8f7b.slice. Feb 13 15:27:02.442063 containerd[1510]: time="2025-02-13T15:27:02.441633173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:02.553228 containerd[1510]: time="2025-02-13T15:27:02.553185232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:02.554777 containerd[1510]: time="2025-02-13T15:27:02.554482730Z" level=error msg="Failed to destroy network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.556599 containerd[1510]: time="2025-02-13T15:27:02.556319156Z" level=error msg="encountered an error cleaning up failed sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.556599 containerd[1510]: time="2025-02-13T15:27:02.556435878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.557890 kubelet[2787]: E0213 15:27:02.557646 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.557890 kubelet[2787]: E0213 15:27:02.557743 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:02.557890 kubelet[2787]: E0213 15:27:02.557768 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:02.560666 kubelet[2787]: E0213 15:27:02.557836 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:02.559750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f-shm.mount: Deactivated successfully. Feb 13 15:27:02.590389 containerd[1510]: time="2025-02-13T15:27:02.590344277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:02.601010 kubelet[2787]: I0213 15:27:02.600659 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f" Feb 13 15:27:02.602357 containerd[1510]: time="2025-02-13T15:27:02.602149524Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:02.602357 containerd[1510]: time="2025-02-13T15:27:02.602348727Z" level=info msg="Ensure that sandbox 2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f in task-service has been cleanup successfully" Feb 13 15:27:02.604615 containerd[1510]: time="2025-02-13T15:27:02.604457917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:0,}" Feb 13 15:27:02.605725 containerd[1510]: time="2025-02-13T15:27:02.605692615Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:02.605725 containerd[1510]: time="2025-02-13T15:27:02.605720935Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:02.608267 containerd[1510]: time="2025-02-13T15:27:02.608220050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:1,}" Feb 13 15:27:02.616087 containerd[1510]: time="2025-02-13T15:27:02.616034641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:27:02.656441 containerd[1510]: time="2025-02-13T15:27:02.656301371Z" level=error msg="Failed to destroy network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.656954 containerd[1510]: time="2025-02-13T15:27:02.656860419Z" level=error msg="encountered an error cleaning up failed sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 15:27:02.657312 containerd[1510]: time="2025-02-13T15:27:02.657199023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.658927 kubelet[2787]: E0213 15:27:02.658869 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.659580 kubelet[2787]: E0213 15:27:02.659142 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:02.659580 kubelet[2787]: E0213 15:27:02.659172 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:02.659580 kubelet[2787]: E0213 15:27:02.659221 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:02.755852 containerd[1510]: time="2025-02-13T15:27:02.755227531Z" level=error msg="Failed to destroy network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.759572 containerd[1510]: time="2025-02-13T15:27:02.759518671Z" level=error msg="Failed to destroy network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 15:27:02.759751 containerd[1510]: time="2025-02-13T15:27:02.759730794Z" level=error msg="encountered an error cleaning up failed sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.759818 containerd[1510]: time="2025-02-13T15:27:02.759793035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.760143 kubelet[2787]: E0213 15:27:02.760036 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.760143 kubelet[2787]: E0213 15:27:02.760134 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:02.760304 kubelet[2787]: E0213 15:27:02.760158 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:02.760304 kubelet[2787]: E0213 15:27:02.760203 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:02.761573 containerd[1510]: time="2025-02-13T15:27:02.761470979Z" level=error msg="encountered an error cleaning up failed sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.761573 containerd[1510]: time="2025-02-13T15:27:02.761557300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.762323 kubelet[2787]: E0213 15:27:02.761860 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.762323 kubelet[2787]: E0213 15:27:02.761983 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:02.762323 kubelet[2787]: E0213 15:27:02.762025 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:02.762444 kubelet[2787]: E0213 15:27:02.762088 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:02.778757 containerd[1510]: time="2025-02-13T15:27:02.778510060Z" level=error msg="Failed to destroy network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.779265 containerd[1510]: time="2025-02-13T15:27:02.779127189Z" level=error msg="encountered an error cleaning up failed 
sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.779265 containerd[1510]: time="2025-02-13T15:27:02.779210510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.780037 kubelet[2787]: E0213 15:27:02.779652 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:02.780037 kubelet[2787]: E0213 15:27:02.779704 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:02.780037 kubelet[2787]: E0213 15:27:02.779726 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:02.780174 kubelet[2787]: E0213 15:27:02.779766 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:03.364870 kubelet[2787]: E0213 15:27:03.364464 2787 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 15:27:03.364870 kubelet[2787]: E0213 15:27:03.364555 2787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09d81451-756a-493d-9c79-aa5ef1100e0d-calico-apiserver-certs podName:09d81451-756a-493d-9c79-aa5ef1100e0d nodeName:}" failed. 
No retries permitted until 2025-02-13 15:27:03.86453403 +0000 UTC m=+33.534665471 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/09d81451-756a-493d-9c79-aa5ef1100e0d-calico-apiserver-certs") pod "calico-apiserver-7688c5dd9f-n9z22" (UID: "09d81451-756a-493d-9c79-aa5ef1100e0d") : failed to sync secret cache: timed out waiting for the condition Feb 13 15:27:03.463594 systemd[1]: run-netns-cni\x2d03dff595\x2d2509\x2debb5\x2d053b\x2d395fb2a01eda.mount: Deactivated successfully. Feb 13 15:27:03.465408 kubelet[2787]: E0213 15:27:03.465170 2787 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 15:27:03.465408 kubelet[2787]: E0213 15:27:03.465249 2787 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea674312-9393-4b86-8bbb-6e3a7a4e2c23-calico-apiserver-certs podName:ea674312-9393-4b86-8bbb-6e3a7a4e2c23 nodeName:}" failed. No retries permitted until 2025-02-13 15:27:03.965230301 +0000 UTC m=+33.635361742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ea674312-9393-4b86-8bbb-6e3a7a4e2c23-calico-apiserver-certs") pod "calico-apiserver-7688c5dd9f-wlqt5" (UID: "ea674312-9393-4b86-8bbb-6e3a7a4e2c23") : failed to sync secret cache: timed out waiting for the condition Feb 13 15:27:03.612970 kubelet[2787]: I0213 15:27:03.612613 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62" Feb 13 15:27:03.614198 containerd[1510]: time="2025-02-13T15:27:03.613653991Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:03.614198 containerd[1510]: time="2025-02-13T15:27:03.614025756Z" level=info msg="Ensure that sandbox ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62 in task-service has been cleanup successfully" Feb 13 15:27:03.615147 containerd[1510]: time="2025-02-13T15:27:03.614934608Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:03.615147 containerd[1510]: time="2025-02-13T15:27:03.614956929Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:03.617618 systemd[1]: run-netns-cni\x2d75890e01\x2d2a43\x2d311e\x2d61a4\x2d699a40fdd62d.mount: Deactivated successfully. 
Feb 13 15:27:03.619865 containerd[1510]: time="2025-02-13T15:27:03.618747341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:1,}" Feb 13 15:27:03.621417 kubelet[2787]: I0213 15:27:03.621190 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e" Feb 13 15:27:03.623181 containerd[1510]: time="2025-02-13T15:27:03.623151682Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:03.623659 kubelet[2787]: I0213 15:27:03.623633 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0" Feb 13 15:27:03.624760 containerd[1510]: time="2025-02-13T15:27:03.624730744Z" level=info msg="Ensure that sandbox ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e in task-service has been cleanup successfully" Feb 13 15:27:03.626673 containerd[1510]: time="2025-02-13T15:27:03.625571235Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:03.626673 containerd[1510]: time="2025-02-13T15:27:03.625595795Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:03.626673 containerd[1510]: time="2025-02-13T15:27:03.625608756Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:03.626673 containerd[1510]: time="2025-02-13T15:27:03.625743318Z" level=info msg="Ensure that sandbox dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0 in task-service has been cleanup successfully" Feb 13 15:27:03.628311 containerd[1510]: time="2025-02-13T15:27:03.628274712Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:03.629238 containerd[1510]: time="2025-02-13T15:27:03.628434555Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:03.629781 systemd[1]: run-netns-cni\x2d1273009c\x2dfaa6\x2d0910\x2ded5d\x2db368fc5918e1.mount: Deactivated successfully. Feb 13 15:27:03.630269 systemd[1]: run-netns-cni\x2d7eca9d50\x2d6e22\x2d20ed\x2d17ec\x2d271978afd8f7.mount: Deactivated successfully. 
Feb 13 15:27:03.632695 containerd[1510]: time="2025-02-13T15:27:03.631409876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:1,}" Feb 13 15:27:03.634795 containerd[1510]: time="2025-02-13T15:27:03.634748642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:1,}" Feb 13 15:27:03.635494 kubelet[2787]: I0213 15:27:03.635466 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a" Feb 13 15:27:03.636240 containerd[1510]: time="2025-02-13T15:27:03.636208702Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:03.636964 containerd[1510]: time="2025-02-13T15:27:03.636642908Z" level=info msg="Ensure that sandbox e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a in task-service has been cleanup successfully" Feb 13 15:27:03.640512 containerd[1510]: time="2025-02-13T15:27:03.638810098Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:03.640512 containerd[1510]: time="2025-02-13T15:27:03.638961140Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:03.641599 containerd[1510]: time="2025-02-13T15:27:03.641559336Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:03.642082 containerd[1510]: time="2025-02-13T15:27:03.642014822Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:03.642082 containerd[1510]: time="2025-02-13T15:27:03.642041343Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:03.643607 systemd[1]: run-netns-cni\x2dcc9ee1cd\x2d39f6\x2dbdb2\x2d4edf\x2dbac5e95200c6.mount: Deactivated successfully. 
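Every sandbox failure above reports the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that calico/node writes once it is up. Purely as an illustration of what that error means (a minimal sketch, not Calico's actual source; only the path is taken from the log):

```go
// nodenamecheck.go - hedged sketch of the check behind
// "stat /var/lib/calico/nodename: no such file or directory".
// calico/node writes this file on startup; the CNI plugin reads it on every
// ADD/DEL, so sandbox setup keeps failing while the file is absent.
package main

import (
	"fmt"
	"os"
)

const nodenamePath = "/var/lib/calico/nodename" // path taken from the log entries above

func main() {
	data, err := os.ReadFile(nodenamePath)
	if err != nil {
		if os.IsNotExist(err) {
			// Same condition the CNI plugin reports: calico/node has not (yet)
			// mounted /var/lib/calico/ and written the nodename file.
			fmt.Fprintf(os.Stderr, "stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenamePath)
			os.Exit(1)
		}
		fmt.Fprintf(os.Stderr, "reading %s: %v\n", nodenamePath, err)
		os.Exit(1)
	}
	fmt.Printf("node name recorded by calico/node: %s\n", string(data))
}
```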
Feb 13 15:27:03.644076 containerd[1510]: time="2025-02-13T15:27:03.643986810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:2,}" Feb 13 15:27:03.809260 containerd[1510]: time="2025-02-13T15:27:03.809116450Z" level=error msg="Failed to destroy network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.809609 containerd[1510]: time="2025-02-13T15:27:03.809584097Z" level=error msg="encountered an error cleaning up failed sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.809923 containerd[1510]: time="2025-02-13T15:27:03.809721779Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.810071 kubelet[2787]: E0213 15:27:03.809957 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.810071 kubelet[2787]: E0213 15:27:03.810023 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:03.810071 kubelet[2787]: E0213 15:27:03.810046 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:03.810456 kubelet[2787]: E0213 15:27:03.810101 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:03.823297 containerd[1510]: time="2025-02-13T15:27:03.823211245Z" level=error msg="Failed to destroy network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.823816 containerd[1510]: time="2025-02-13T15:27:03.823781293Z" level=error msg="encountered an error cleaning up failed sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.824132 containerd[1510]: time="2025-02-13T15:27:03.824062177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.824364 containerd[1510]: time="2025-02-13T15:27:03.824318340Z" level=error msg="Failed to destroy network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.824806 containerd[1510]: time="2025-02-13T15:27:03.824742066Z" level=error msg="encountered an error cleaning up failed sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.825011 containerd[1510]: time="2025-02-13T15:27:03.824898428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.827011 kubelet[2787]: E0213 15:27:03.825351 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.827011 kubelet[2787]: E0213 15:27:03.825406 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:03.827011 kubelet[2787]: E0213 15:27:03.825425 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:03.827011 kubelet[2787]: E0213 15:27:03.825362 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.827241 containerd[1510]: time="2025-02-13T15:27:03.825602318Z" level=error msg="Failed to destroy network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.827241 containerd[1510]: time="2025-02-13T15:27:03.826187846Z" level=error msg="encountered an error cleaning up failed sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.827241 containerd[1510]: time="2025-02-13T15:27:03.826284287Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.828640 kubelet[2787]: E0213 15:27:03.825470 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:03.828640 kubelet[2787]: E0213 15:27:03.825483 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:03.828640 kubelet[2787]: E0213 15:27:03.825506 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:03.828858 kubelet[2787]: E0213 15:27:03.825540 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:03.828858 kubelet[2787]: E0213 15:27:03.826472 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:03.828858 kubelet[2787]: E0213 15:27:03.826515 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:03.829101 kubelet[2787]: E0213 15:27:03.826532 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:03.829101 kubelet[2787]: E0213 15:27:03.826563 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:04.072220 containerd[1510]: time="2025-02-13T15:27:04.071438890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:27:04.083315 containerd[1510]: time="2025-02-13T15:27:04.082319117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:27:04.150490 containerd[1510]: time="2025-02-13T15:27:04.150436275Z" level=error msg="Failed to destroy network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.151234 containerd[1510]: time="2025-02-13T15:27:04.151199406Z" level=error msg="encountered an error cleaning up failed sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.151342 containerd[1510]: time="2025-02-13T15:27:04.151276607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.151559 kubelet[2787]: E0213 15:27:04.151500 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.152251 kubelet[2787]: E0213 15:27:04.151560 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:04.152251 kubelet[2787]: E0213 
15:27:04.151581 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:04.152251 kubelet[2787]: E0213 15:27:04.151703 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" Feb 13 15:27:04.168459 containerd[1510]: time="2025-02-13T15:27:04.168406038Z" level=error msg="Failed to destroy network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.169370 containerd[1510]: time="2025-02-13T15:27:04.169189728Z" level=error msg="encountered an error cleaning up failed sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.170295 containerd[1510]: time="2025-02-13T15:27:04.170260023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.171470 kubelet[2787]: E0213 15:27:04.171176 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.171470 kubelet[2787]: E0213 15:27:04.171238 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:04.171470 kubelet[2787]: E0213 15:27:04.171344 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:04.171636 kubelet[2787]: E0213 15:27:04.171393 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" Feb 13 15:27:04.467139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6-shm.mount: Deactivated successfully. Feb 13 15:27:04.640885 kubelet[2787]: I0213 15:27:04.640839 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087" Feb 13 15:27:04.641966 containerd[1510]: time="2025-02-13T15:27:04.641711340Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:04.641966 containerd[1510]: time="2025-02-13T15:27:04.641884222Z" level=info msg="Ensure that sandbox 52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087 in task-service has been cleanup successfully" Feb 13 15:27:04.642964 containerd[1510]: time="2025-02-13T15:27:04.642667512Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:04.644106 containerd[1510]: time="2025-02-13T15:27:04.643457923Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:04.645885 systemd[1]: run-netns-cni\x2d88958b22\x2d891f\x2d7ca8\x2d9c4f\x2d94b781ab491a.mount: Deactivated successfully. 
Feb 13 15:27:04.648092 containerd[1510]: time="2025-02-13T15:27:04.647890423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:27:04.648616 kubelet[2787]: I0213 15:27:04.648590 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35" Feb 13 15:27:04.652066 containerd[1510]: time="2025-02-13T15:27:04.651279189Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:04.652066 containerd[1510]: time="2025-02-13T15:27:04.651945757Z" level=info msg="Ensure that sandbox 12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35 in task-service has been cleanup successfully" Feb 13 15:27:04.653032 containerd[1510]: time="2025-02-13T15:27:04.652601686Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:04.653032 containerd[1510]: time="2025-02-13T15:27:04.652623287Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:04.655330 systemd[1]: run-netns-cni\x2d3f0dffdc\x2d740a\x2d60df\x2d3c02\x2d9e1f71c225f2.mount: Deactivated successfully. Feb 13 15:27:04.658945 containerd[1510]: time="2025-02-13T15:27:04.657980879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:27:04.661841 kubelet[2787]: I0213 15:27:04.661555 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51" Feb 13 15:27:04.664166 containerd[1510]: time="2025-02-13T15:27:04.662765663Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:04.664166 containerd[1510]: time="2025-02-13T15:27:04.662955826Z" level=info msg="Ensure that sandbox 67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51 in task-service has been cleanup successfully" Feb 13 15:27:04.665788 systemd[1]: run-netns-cni\x2d815fd664\x2db418\x2d3f87\x2d5303\x2d691f1c31dabd.mount: Deactivated successfully. 
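The repeated error text points at one check: confirm that the calico/node container is actually running. A hedged client-go sketch of that check follows; the calico-system namespace appears in the log, while the kubeconfig path and the conventional k8s-app=calico-node label are assumptions, not facts from the log:

```go
// calinodecheck.go - hedged diagnostic sketch: list calico/node pods to see
// whether the component the CNI errors point at is running at all.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, "build client:", err)
		os.Exit(1)
	}
	// Assumption: standard label for the calico/node DaemonSet pods.
	pods, err := clientset.CoreV1().Pods("calico-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=calico-node"})
	if err != nil {
		fmt.Fprintln(os.Stderr, "list calico-node pods:", err)
		os.Exit(1)
	}
	if len(pods.Items) == 0 {
		fmt.Println("no calico-node pods found; CNI ADD will keep failing until they run")
		return
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}
```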
Feb 13 15:27:04.666463 containerd[1510]: time="2025-02-13T15:27:04.665103895Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:04.668419 containerd[1510]: time="2025-02-13T15:27:04.667312845Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:04.670448 containerd[1510]: time="2025-02-13T15:27:04.670257564Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:04.673163 containerd[1510]: time="2025-02-13T15:27:04.671895506Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:04.673163 containerd[1510]: time="2025-02-13T15:27:04.671947427Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:04.673314 kubelet[2787]: I0213 15:27:04.672274 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54" Feb 13 15:27:04.674679 containerd[1510]: time="2025-02-13T15:27:04.674622423Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:04.675185 containerd[1510]: time="2025-02-13T15:27:04.674941748Z" level=info msg="Ensure that sandbox 4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54 in task-service has been cleanup successfully" Feb 13 15:27:04.675341 containerd[1510]: time="2025-02-13T15:27:04.675309273Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:04.675559 containerd[1510]: time="2025-02-13T15:27:04.675384154Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:04.675559 containerd[1510]: time="2025-02-13T15:27:04.675399274Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:04.676537 containerd[1510]: time="2025-02-13T15:27:04.675771519Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:04.676537 containerd[1510]: time="2025-02-13T15:27:04.675796439Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:04.677453 containerd[1510]: time="2025-02-13T15:27:04.677406861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:3,}" Feb 13 15:27:04.677661 containerd[1510]: time="2025-02-13T15:27:04.677630704Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:04.677732 containerd[1510]: time="2025-02-13T15:27:04.677712425Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:04.677732 containerd[1510]: time="2025-02-13T15:27:04.677727065Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:04.678925 containerd[1510]: time="2025-02-13T15:27:04.678883961Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:2,}" Feb 13 15:27:04.681159 kubelet[2787]: I0213 15:27:04.681129 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6" Feb 13 15:27:04.683799 containerd[1510]: time="2025-02-13T15:27:04.683678345Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:04.684150 containerd[1510]: time="2025-02-13T15:27:04.684117351Z" level=info msg="Ensure that sandbox 12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6 in task-service has been cleanup successfully" Feb 13 15:27:04.684819 containerd[1510]: time="2025-02-13T15:27:04.684709679Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:04.684819 containerd[1510]: time="2025-02-13T15:27:04.684742880Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:04.688176 containerd[1510]: time="2025-02-13T15:27:04.687730880Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:04.688869 containerd[1510]: time="2025-02-13T15:27:04.688578531Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:04.688869 containerd[1510]: time="2025-02-13T15:27:04.688635332Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:04.689769 containerd[1510]: time="2025-02-13T15:27:04.689586945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:2,}" Feb 13 15:27:04.691659 kubelet[2787]: I0213 15:27:04.691400 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b" Feb 13 15:27:04.694277 containerd[1510]: time="2025-02-13T15:27:04.694211047Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:04.694565 containerd[1510]: time="2025-02-13T15:27:04.694527052Z" level=info msg="Ensure that sandbox 3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b in task-service has been cleanup successfully" Feb 13 15:27:04.695432 containerd[1510]: time="2025-02-13T15:27:04.695358303Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:04.695506 containerd[1510]: time="2025-02-13T15:27:04.695435744Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:04.704876 containerd[1510]: time="2025-02-13T15:27:04.704555507Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:04.704876 containerd[1510]: time="2025-02-13T15:27:04.704848591Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:04.704876 containerd[1510]: 
time="2025-02-13T15:27:04.704863231Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:04.707834 containerd[1510]: time="2025-02-13T15:27:04.707630028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:2,}" Feb 13 15:27:04.898048 containerd[1510]: time="2025-02-13T15:27:04.897423467Z" level=error msg="Failed to destroy network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.898822 containerd[1510]: time="2025-02-13T15:27:04.898383520Z" level=error msg="encountered an error cleaning up failed sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.899719 containerd[1510]: time="2025-02-13T15:27:04.899477735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.900461 kubelet[2787]: E0213 15:27:04.899965 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.900461 kubelet[2787]: E0213 15:27:04.900052 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:04.900461 kubelet[2787]: E0213 15:27:04.900073 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:04.900602 kubelet[2787]: E0213 15:27:04.900118 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:04.926868 containerd[1510]: time="2025-02-13T15:27:04.926734863Z" level=error msg="Failed to destroy network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.931302 containerd[1510]: time="2025-02-13T15:27:04.931234203Z" level=error msg="encountered an error cleaning up failed sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.934176 containerd[1510]: time="2025-02-13T15:27:04.931332565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.934258 kubelet[2787]: E0213 15:27:04.931548 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.934258 kubelet[2787]: E0213 15:27:04.931636 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:04.934258 kubelet[2787]: E0213 15:27:04.931656 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:04.934363 containerd[1510]: time="2025-02-13T15:27:04.934199923Z" level=error msg="Failed to destroy network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.934400 kubelet[2787]: E0213 15:27:04.931709 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:04.935093 containerd[1510]: time="2025-02-13T15:27:04.934869252Z" level=error msg="encountered an error cleaning up failed sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.935181 containerd[1510]: time="2025-02-13T15:27:04.935155256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.935515 kubelet[2787]: E0213 15:27:04.935482 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.935669 kubelet[2787]: E0213 15:27:04.935645 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:04.935753 kubelet[2787]: E0213 15:27:04.935726 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:04.937032 kubelet[2787]: E0213 15:27:04.936888 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" Feb 13 15:27:04.944125 containerd[1510]: time="2025-02-13T15:27:04.944047456Z" level=error msg="Failed to destroy network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.944530 containerd[1510]: time="2025-02-13T15:27:04.944494102Z" level=error msg="encountered an error cleaning up failed sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.944611 containerd[1510]: time="2025-02-13T15:27:04.944562703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.944928 kubelet[2787]: E0213 15:27:04.944824 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.944928 kubelet[2787]: E0213 15:27:04.944885 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:04.945485 kubelet[2787]: E0213 15:27:04.945190 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:04.945485 
kubelet[2787]: E0213 15:27:04.945263 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:04.957228 containerd[1510]: time="2025-02-13T15:27:04.957062631Z" level=error msg="Failed to destroy network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.958463 containerd[1510]: time="2025-02-13T15:27:04.958423890Z" level=error msg="encountered an error cleaning up failed sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.958683 containerd[1510]: time="2025-02-13T15:27:04.958626493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.959127 kubelet[2787]: E0213 15:27:04.959065 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.959300 kubelet[2787]: E0213 15:27:04.959124 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:04.959300 kubelet[2787]: E0213 15:27:04.959148 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:04.959300 kubelet[2787]: E0213 15:27:04.959192 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" Feb 13 15:27:04.962830 containerd[1510]: time="2025-02-13T15:27:04.962787829Z" level=error msg="Failed to destroy network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.963466 containerd[1510]: time="2025-02-13T15:27:04.963431077Z" level=error msg="encountered an error cleaning up failed sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.963527 containerd[1510]: time="2025-02-13T15:27:04.963503798Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.964239 kubelet[2787]: E0213 15:27:04.964090 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:04.964239 kubelet[2787]: E0213 15:27:04.964167 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:04.964855 kubelet[2787]: E0213 15:27:04.964189 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:04.964855 kubelet[2787]: E0213 15:27:04.964448 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:05.460740 systemd[1]: run-netns-cni\x2d30caa814\x2d3ffc\x2d0482\x2d0533\x2db6af224a0ea7.mount: Deactivated successfully. Feb 13 15:27:05.461107 systemd[1]: run-netns-cni\x2d70ff2eb3\x2d7411\x2d9e28\x2d8cb8\x2df60ac72cff6f.mount: Deactivated successfully. Feb 13 15:27:05.461171 systemd[1]: run-netns-cni\x2d4d571e78\x2d6c36\x2d8c08\x2db3fd\x2dbb2733c3cb60.mount: Deactivated successfully. Feb 13 15:27:05.696512 kubelet[2787]: I0213 15:27:05.696473 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896" Feb 13 15:27:05.697943 containerd[1510]: time="2025-02-13T15:27:05.697697076Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:05.699946 containerd[1510]: time="2025-02-13T15:27:05.697896599Z" level=info msg="Ensure that sandbox 913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896 in task-service has been cleanup successfully" Feb 13 15:27:05.702554 systemd[1]: run-netns-cni\x2d2c9d7881\x2d7f58\x2d3930\x2d0810\x2d0f4b4caac01d.mount: Deactivated successfully. 
Feb 13 15:27:05.703841 containerd[1510]: time="2025-02-13T15:27:05.703062947Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:05.703841 containerd[1510]: time="2025-02-13T15:27:05.703103107Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:05.704330 containerd[1510]: time="2025-02-13T15:27:05.704299923Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:05.704769 containerd[1510]: time="2025-02-13T15:27:05.704713888Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:05.704854 containerd[1510]: time="2025-02-13T15:27:05.704837090Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:05.705826 containerd[1510]: time="2025-02-13T15:27:05.705746262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:27:05.706210 kubelet[2787]: I0213 15:27:05.706186 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30" Feb 13 15:27:05.707192 containerd[1510]: time="2025-02-13T15:27:05.706650314Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:05.707192 containerd[1510]: time="2025-02-13T15:27:05.706852757Z" level=info msg="Ensure that sandbox 5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30 in task-service has been cleanup successfully" Feb 13 15:27:05.707709 containerd[1510]: time="2025-02-13T15:27:05.707683567Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:05.707783 containerd[1510]: time="2025-02-13T15:27:05.707769849Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:05.709787 systemd[1]: run-netns-cni\x2dce2b6566\x2d1570\x2d5f01\x2d4c6d\x2d22eaefa522af.mount: Deactivated successfully. 
Feb 13 15:27:05.710989 containerd[1510]: time="2025-02-13T15:27:05.710491524Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:05.710989 containerd[1510]: time="2025-02-13T15:27:05.710610246Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:05.710989 containerd[1510]: time="2025-02-13T15:27:05.710620606Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:05.711843 containerd[1510]: time="2025-02-13T15:27:05.711809142Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:05.712120 containerd[1510]: time="2025-02-13T15:27:05.712100146Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:05.712210 containerd[1510]: time="2025-02-13T15:27:05.712194387Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:05.716532 containerd[1510]: time="2025-02-13T15:27:05.716497924Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:05.717773 kubelet[2787]: I0213 15:27:05.717146 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb" Feb 13 15:27:05.718539 containerd[1510]: time="2025-02-13T15:27:05.718171866Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:05.718539 containerd[1510]: time="2025-02-13T15:27:05.718200666Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:05.718539 containerd[1510]: time="2025-02-13T15:27:05.717878542Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:05.718539 containerd[1510]: time="2025-02-13T15:27:05.718363228Z" level=info msg="Ensure that sandbox e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb in task-service has been cleanup successfully" Feb 13 15:27:05.723262 systemd[1]: run-netns-cni\x2dca992ce4\x2dd4cb\x2d88f4\x2d797d\x2d7c7e3c0e1815.mount: Deactivated successfully. 
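
The "run-netns-cni-*.mount: Deactivated successfully" records are systemd releasing the per-sandbox network namespace bind mounts under /run/netns as each failed sandbox is cleaned up. A hedged sketch for spotting any leftover CNI namespaces on such a node; it assumes a Linux /proc/mounts and is not part of the tooling shown in this log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// CNI network namespaces show up as bind mounts at /run/netns/cni-<id>;
		// anything printed here is a namespace that has not been torn down yet.
		if strings.Contains(sc.Text(), "/run/netns/cni-") {
			fmt.Println(sc.Text())
		}
	}
}
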
Feb 13 15:27:05.724619 containerd[1510]: time="2025-02-13T15:27:05.724588430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:4,}" Feb 13 15:27:05.725785 containerd[1510]: time="2025-02-13T15:27:05.725751885Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:05.726366 containerd[1510]: time="2025-02-13T15:27:05.726331693Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:05.727559 kubelet[2787]: I0213 15:27:05.727527 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c" Feb 13 15:27:05.728458 containerd[1510]: time="2025-02-13T15:27:05.728202158Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:05.729094 containerd[1510]: time="2025-02-13T15:27:05.728564282Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:05.729094 containerd[1510]: time="2025-02-13T15:27:05.728590363Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:05.729782 containerd[1510]: time="2025-02-13T15:27:05.729378573Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:05.729782 containerd[1510]: time="2025-02-13T15:27:05.729541015Z" level=info msg="Ensure that sandbox 7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c in task-service has been cleanup successfully" Feb 13 15:27:05.730222 containerd[1510]: time="2025-02-13T15:27:05.730192384Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:05.730571 containerd[1510]: time="2025-02-13T15:27:05.730340626Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:05.730947 containerd[1510]: time="2025-02-13T15:27:05.730780072Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:05.732639 systemd[1]: run-netns-cni\x2d3e43955e\x2d827e\x2d1b93\x2d9fb0\x2da4dbed754e5b.mount: Deactivated successfully. 
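
The Attempt counter in each RunPodSandbox request reflects kubelet's retry loop: a failed network setup leaves the sandbox in SANDBOX_UNKNOWN, the stale sandboxes are stopped and torn down, and the pod is re-queued with the attempt incremented (Attempt:3 then Attempt:4 for csi-node-driver-9kzwx, Attempt:2 through Attempt:4 for coredns-7db6d8ff4d-77zlz). A hedged sketch of that cycle, with illustrative names rather than kubelet's real types:

package main

import (
	"errors"
	"fmt"
)

// nodenameReady stands in for the real precondition: calico/node has
// written /var/lib/calico/nodename. False mirrors the state in this log.
var nodenameReady = false

func runPodSandbox(pod string, attempt int) error {
	if !nodenameReady {
		return fmt.Errorf("failed to setup network for sandbox of %s (attempt %d): %w",
			pod, attempt, errors.New("stat /var/lib/calico/nodename: no such file or directory"))
	}
	return nil
}

func main() {
	attempt := 3 // the log shows csi-node-driver-9kzwx already on Attempt:3
	for i := 0; i < 3; i++ {
		if err := runPodSandbox("csi-node-driver-9kzwx", attempt); err != nil {
			// On failure the sandbox is torn down and the pod re-queued,
			// which is why the same error repeats with a higher Attempt.
			fmt.Println("tear down failed sandbox, retry later:", err)
			attempt++
		}
	}
}
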
Feb 13 15:27:05.732981 containerd[1510]: time="2025-02-13T15:27:05.732957420Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:05.733257 containerd[1510]: time="2025-02-13T15:27:05.733237224Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:05.734087 containerd[1510]: time="2025-02-13T15:27:05.734056195Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:05.736718 containerd[1510]: time="2025-02-13T15:27:05.735862578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:3,}" Feb 13 15:27:05.736841 kubelet[2787]: I0213 15:27:05.736464 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c" Feb 13 15:27:05.737403 containerd[1510]: time="2025-02-13T15:27:05.737380518Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:05.737485 containerd[1510]: time="2025-02-13T15:27:05.737471440Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:05.739331 containerd[1510]: time="2025-02-13T15:27:05.739289064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:27:05.740294 containerd[1510]: time="2025-02-13T15:27:05.740266036Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:05.740524 containerd[1510]: time="2025-02-13T15:27:05.740459079Z" level=info msg="Ensure that sandbox 6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c in task-service has been cleanup successfully" Feb 13 15:27:05.740954 containerd[1510]: time="2025-02-13T15:27:05.740857124Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:05.740954 containerd[1510]: time="2025-02-13T15:27:05.740879084Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:05.742131 containerd[1510]: time="2025-02-13T15:27:05.741839257Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:05.743989 containerd[1510]: time="2025-02-13T15:27:05.743958965Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:05.744180 containerd[1510]: time="2025-02-13T15:27:05.744102287Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:05.744813 containerd[1510]: time="2025-02-13T15:27:05.744662454Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:05.744813 containerd[1510]: time="2025-02-13T15:27:05.744752455Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:05.744813 
containerd[1510]: time="2025-02-13T15:27:05.744762776Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:05.745946 kubelet[2787]: I0213 15:27:05.745438 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce" Feb 13 15:27:05.747268 containerd[1510]: time="2025-02-13T15:27:05.747074726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:3,}" Feb 13 15:27:05.747461 containerd[1510]: time="2025-02-13T15:27:05.747434051Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:05.747664 containerd[1510]: time="2025-02-13T15:27:05.747644854Z" level=info msg="Ensure that sandbox 58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce in task-service has been cleanup successfully" Feb 13 15:27:05.747896 containerd[1510]: time="2025-02-13T15:27:05.747877977Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:05.748040 containerd[1510]: time="2025-02-13T15:27:05.747982018Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:05.750362 containerd[1510]: time="2025-02-13T15:27:05.749338676Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:05.751803 containerd[1510]: time="2025-02-13T15:27:05.751604306Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:05.751803 containerd[1510]: time="2025-02-13T15:27:05.751635466Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:05.752213 containerd[1510]: time="2025-02-13T15:27:05.752158153Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:05.752285 containerd[1510]: time="2025-02-13T15:27:05.752261154Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:05.752285 containerd[1510]: time="2025-02-13T15:27:05.752276915Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:05.753680 containerd[1510]: time="2025-02-13T15:27:05.753123926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:3,}" Feb 13 15:27:05.939417 containerd[1510]: time="2025-02-13T15:27:05.939358858Z" level=error msg="Failed to destroy network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.939990 containerd[1510]: time="2025-02-13T15:27:05.939958705Z" level=error msg="encountered an error cleaning up failed sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.940498 containerd[1510]: time="2025-02-13T15:27:05.940469472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.941049 kubelet[2787]: E0213 15:27:05.941012 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.941242 kubelet[2787]: E0213 15:27:05.941208 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:05.941314 kubelet[2787]: E0213 15:27:05.941299 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:05.941459 kubelet[2787]: E0213 15:27:05.941420 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:05.987761 containerd[1510]: time="2025-02-13T15:27:05.987159527Z" level=error msg="Failed to destroy network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.990426 containerd[1510]: time="2025-02-13T15:27:05.990366089Z" level=error msg="encountered an error cleaning up failed sandbox 
\"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.990558 containerd[1510]: time="2025-02-13T15:27:05.990458810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.990731 kubelet[2787]: E0213 15:27:05.990681 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:05.990776 kubelet[2787]: E0213 15:27:05.990734 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:05.990776 kubelet[2787]: E0213 15:27:05.990753 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:05.990821 kubelet[2787]: E0213 15:27:05.990789 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" Feb 13 15:27:06.023308 containerd[1510]: time="2025-02-13T15:27:06.022739068Z" level=error msg="Failed to destroy network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 15:27:06.023308 containerd[1510]: time="2025-02-13T15:27:06.023111273Z" level=error msg="encountered an error cleaning up failed sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.023308 containerd[1510]: time="2025-02-13T15:27:06.023180234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.023503 kubelet[2787]: E0213 15:27:06.023383 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.023503 kubelet[2787]: E0213 15:27:06.023432 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:06.023503 kubelet[2787]: E0213 15:27:06.023457 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:06.023586 kubelet[2787]: E0213 15:27:06.023492 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:06.027409 containerd[1510]: time="2025-02-13T15:27:06.027355768Z" level=error msg="Failed to destroy network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.027814 containerd[1510]: time="2025-02-13T15:27:06.027776293Z" level=error msg="encountered an error cleaning up failed sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.027867 containerd[1510]: time="2025-02-13T15:27:06.027854174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.028237 kubelet[2787]: E0213 15:27:06.028107 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.028237 kubelet[2787]: E0213 15:27:06.028164 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:06.028237 kubelet[2787]: E0213 15:27:06.028183 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:06.028990 kubelet[2787]: E0213 15:27:06.028757 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" Feb 13 15:27:06.039236 containerd[1510]: time="2025-02-13T15:27:06.039152120Z" level=error msg="Failed to destroy network for sandbox 
\"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.040962 containerd[1510]: time="2025-02-13T15:27:06.040902462Z" level=error msg="encountered an error cleaning up failed sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.041193 containerd[1510]: time="2025-02-13T15:27:06.041165265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.041791 kubelet[2787]: E0213 15:27:06.041745 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.042363 kubelet[2787]: E0213 15:27:06.042013 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:06.042363 kubelet[2787]: E0213 15:27:06.042194 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:06.042878 kubelet[2787]: E0213 15:27:06.042713 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:06.067975 containerd[1510]: time="2025-02-13T15:27:06.067796448Z" 
level=error msg="Failed to destroy network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.068370 containerd[1510]: time="2025-02-13T15:27:06.068195573Z" level=error msg="encountered an error cleaning up failed sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.068370 containerd[1510]: time="2025-02-13T15:27:06.068261494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.068740 kubelet[2787]: E0213 15:27:06.068532 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.068740 kubelet[2787]: E0213 15:27:06.068583 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:06.068740 kubelet[2787]: E0213 15:27:06.068602 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:06.068842 kubelet[2787]: E0213 15:27:06.068646 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:06.460207 systemd[1]: run-netns-cni\x2d673b0a22\x2d2137\x2dc8eb\x2d1ae2\x2def1feef5a1e9.mount: Deactivated successfully. Feb 13 15:27:06.460311 systemd[1]: run-netns-cni\x2d2d616216\x2da309\x2d34b1\x2d19a1\x2d425ad6627ce9.mount: Deactivated successfully. Feb 13 15:27:06.756872 kubelet[2787]: I0213 15:27:06.756738 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d" Feb 13 15:27:06.759150 containerd[1510]: time="2025-02-13T15:27:06.758689410Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:06.759150 containerd[1510]: time="2025-02-13T15:27:06.759074095Z" level=info msg="Ensure that sandbox 6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d in task-service has been cleanup successfully" Feb 13 15:27:06.762542 containerd[1510]: time="2025-02-13T15:27:06.761342885Z" level=info msg="TearDown network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" successfully" Feb 13 15:27:06.762542 containerd[1510]: time="2025-02-13T15:27:06.761475446Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" returns successfully" Feb 13 15:27:06.764894 systemd[1]: run-netns-cni\x2d60775c3f\x2d086f\x2d1591\x2d8c7f\x2ded0d7fea8dde.mount: Deactivated successfully. Feb 13 15:27:06.767388 containerd[1510]: time="2025-02-13T15:27:06.767352802Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:06.768273 containerd[1510]: time="2025-02-13T15:27:06.768159092Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:06.768556 containerd[1510]: time="2025-02-13T15:27:06.768407575Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:06.769033 containerd[1510]: time="2025-02-13T15:27:06.768987743Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:06.769216 containerd[1510]: time="2025-02-13T15:27:06.769092744Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:06.769216 containerd[1510]: time="2025-02-13T15:27:06.769108144Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:06.770784 kubelet[2787]: I0213 15:27:06.770710 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e" Feb 13 15:27:06.771096 containerd[1510]: time="2025-02-13T15:27:06.770700525Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:06.771534 containerd[1510]: time="2025-02-13T15:27:06.771296613Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:06.771534 containerd[1510]: time="2025-02-13T15:27:06.771324573Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 
13 15:27:06.774193 containerd[1510]: time="2025-02-13T15:27:06.774146129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:4,}" Feb 13 15:27:06.774795 containerd[1510]: time="2025-02-13T15:27:06.774618735Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:06.774795 containerd[1510]: time="2025-02-13T15:27:06.774785377Z" level=info msg="Ensure that sandbox a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e in task-service has been cleanup successfully" Feb 13 15:27:06.777316 containerd[1510]: time="2025-02-13T15:27:06.777076687Z" level=info msg="TearDown network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" successfully" Feb 13 15:27:06.777316 containerd[1510]: time="2025-02-13T15:27:06.777109967Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" returns successfully" Feb 13 15:27:06.778876 containerd[1510]: time="2025-02-13T15:27:06.778518705Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:06.779750 containerd[1510]: time="2025-02-13T15:27:06.779265675Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:06.780453 systemd[1]: run-netns-cni\x2dab7bd124\x2db215\x2d0cf0\x2d7a0d\x2d5fdf686631d9.mount: Deactivated successfully. Feb 13 15:27:06.781614 containerd[1510]: time="2025-02-13T15:27:06.781095379Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:06.782636 containerd[1510]: time="2025-02-13T15:27:06.782551557Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:06.782751 containerd[1510]: time="2025-02-13T15:27:06.782673399Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:06.782751 containerd[1510]: time="2025-02-13T15:27:06.782685119Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:06.785234 containerd[1510]: time="2025-02-13T15:27:06.784874867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:27:06.785783 kubelet[2787]: I0213 15:27:06.785705 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f" Feb 13 15:27:06.786848 containerd[1510]: time="2025-02-13T15:27:06.786795852Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:06.787287 containerd[1510]: time="2025-02-13T15:27:06.787264818Z" level=info msg="Ensure that sandbox c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f in task-service has been cleanup successfully" Feb 13 15:27:06.790223 containerd[1510]: time="2025-02-13T15:27:06.790103134Z" level=info msg="TearDown network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" successfully" Feb 13 15:27:06.790223 containerd[1510]: time="2025-02-13T15:27:06.790164495Z" 
level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" returns successfully" Feb 13 15:27:06.790525 systemd[1]: run-netns-cni\x2d334b7534\x2da23c\x2d7a7d\x2d9fdf\x2d4caae5cb6207.mount: Deactivated successfully. Feb 13 15:27:06.793282 containerd[1510]: time="2025-02-13T15:27:06.793207854Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:06.794626 containerd[1510]: time="2025-02-13T15:27:06.794259148Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:06.794725 containerd[1510]: time="2025-02-13T15:27:06.794707034Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:06.796842 containerd[1510]: time="2025-02-13T15:27:06.796805261Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:06.797656 containerd[1510]: time="2025-02-13T15:27:06.797613191Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:06.797656 containerd[1510]: time="2025-02-13T15:27:06.797638791Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:06.799760 containerd[1510]: time="2025-02-13T15:27:06.799723058Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:06.799760 containerd[1510]: time="2025-02-13T15:27:06.799852100Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:06.799760 containerd[1510]: time="2025-02-13T15:27:06.799864220Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:06.802757 kubelet[2787]: I0213 15:27:06.802701 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b" Feb 13 15:27:06.809127 containerd[1510]: time="2025-02-13T15:27:06.808776494Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:06.815053 containerd[1510]: time="2025-02-13T15:27:06.808766014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:4,}" Feb 13 15:27:06.815053 containerd[1510]: time="2025-02-13T15:27:06.809766507Z" level=info msg="Ensure that sandbox b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b in task-service has been cleanup successfully" Feb 13 15:27:06.815053 containerd[1510]: time="2025-02-13T15:27:06.813359233Z" level=info msg="TearDown network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" successfully" Feb 13 15:27:06.815053 containerd[1510]: time="2025-02-13T15:27:06.813395874Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" returns successfully" Feb 13 15:27:06.815053 containerd[1510]: time="2025-02-13T15:27:06.814537088Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:06.812856 
systemd[1]: run-netns-cni\x2dd1855ca7\x2d642f\x2d863d\x2d6a22\x2dd4fed6da09bc.mount: Deactivated successfully. Feb 13 15:27:06.816935 containerd[1510]: time="2025-02-13T15:27:06.816739077Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:06.817243 containerd[1510]: time="2025-02-13T15:27:06.817220763Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:06.818868 containerd[1510]: time="2025-02-13T15:27:06.818830504Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:06.819101 containerd[1510]: time="2025-02-13T15:27:06.819079187Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:06.819101 containerd[1510]: time="2025-02-13T15:27:06.819096507Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:06.823173 containerd[1510]: time="2025-02-13T15:27:06.823132479Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:06.823269 containerd[1510]: time="2025-02-13T15:27:06.823233720Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:06.823269 containerd[1510]: time="2025-02-13T15:27:06.823245360Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:06.824291 containerd[1510]: time="2025-02-13T15:27:06.824253493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:4,}" Feb 13 15:27:06.824704 kubelet[2787]: I0213 15:27:06.824607 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba" Feb 13 15:27:06.827704 containerd[1510]: time="2025-02-13T15:27:06.827503175Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:06.829525 containerd[1510]: time="2025-02-13T15:27:06.829485401Z" level=info msg="Ensure that sandbox 7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba in task-service has been cleanup successfully" Feb 13 15:27:06.831771 containerd[1510]: time="2025-02-13T15:27:06.831641468Z" level=info msg="TearDown network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" successfully" Feb 13 15:27:06.831771 containerd[1510]: time="2025-02-13T15:27:06.831670549Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" returns successfully" Feb 13 15:27:06.833164 containerd[1510]: time="2025-02-13T15:27:06.832931605Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:06.834349 containerd[1510]: time="2025-02-13T15:27:06.834204621Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:06.834349 containerd[1510]: time="2025-02-13T15:27:06.834236222Z" level=info msg="StopPodSandbox for 
\"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:06.835929 containerd[1510]: time="2025-02-13T15:27:06.835060832Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:06.835929 containerd[1510]: time="2025-02-13T15:27:06.835154714Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:06.835929 containerd[1510]: time="2025-02-13T15:27:06.835164234Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:06.836054 kubelet[2787]: I0213 15:27:06.835404 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542" Feb 13 15:27:06.838878 containerd[1510]: time="2025-02-13T15:27:06.838829721Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:06.839280 containerd[1510]: time="2025-02-13T15:27:06.839230406Z" level=info msg="Ensure that sandbox 5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542 in task-service has been cleanup successfully" Feb 13 15:27:06.839582 containerd[1510]: time="2025-02-13T15:27:06.839521090Z" level=info msg="TearDown network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" successfully" Feb 13 15:27:06.839582 containerd[1510]: time="2025-02-13T15:27:06.839576330Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" returns successfully" Feb 13 15:27:06.842138 containerd[1510]: time="2025-02-13T15:27:06.840334340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:27:06.842138 containerd[1510]: time="2025-02-13T15:27:06.841714758Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:06.842138 containerd[1510]: time="2025-02-13T15:27:06.841803799Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:06.842138 containerd[1510]: time="2025-02-13T15:27:06.841812959Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:06.845761 containerd[1510]: time="2025-02-13T15:27:06.845707289Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:06.845890 containerd[1510]: time="2025-02-13T15:27:06.845828971Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:06.845890 containerd[1510]: time="2025-02-13T15:27:06.845841051Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:06.847378 containerd[1510]: time="2025-02-13T15:27:06.847323030Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:06.848043 containerd[1510]: time="2025-02-13T15:27:06.847552073Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" 
successfully" Feb 13 15:27:06.848220 containerd[1510]: time="2025-02-13T15:27:06.848160721Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:06.850427 containerd[1510]: time="2025-02-13T15:27:06.849585299Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:06.850427 containerd[1510]: time="2025-02-13T15:27:06.849733421Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:06.850427 containerd[1510]: time="2025-02-13T15:27:06.849747101Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:06.851143 containerd[1510]: time="2025-02-13T15:27:06.851064438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:5,}" Feb 13 15:27:06.995177 containerd[1510]: time="2025-02-13T15:27:06.995109130Z" level=error msg="Failed to destroy network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.995846 containerd[1510]: time="2025-02-13T15:27:06.995807379Z" level=error msg="encountered an error cleaning up failed sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.995991 containerd[1510]: time="2025-02-13T15:27:06.995962581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.996759 kubelet[2787]: E0213 15:27:06.996393 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:06.996759 kubelet[2787]: E0213 15:27:06.996454 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:06.996759 kubelet[2787]: E0213 15:27:06.996475 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:06.996886 kubelet[2787]: E0213 15:27:06.996521 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:07.023437 containerd[1510]: time="2025-02-13T15:27:07.023320446Z" level=error msg="Failed to destroy network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.024321 containerd[1510]: time="2025-02-13T15:27:07.024063615Z" level=error msg="encountered an error cleaning up failed sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.024321 containerd[1510]: time="2025-02-13T15:27:07.024131376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.025185 kubelet[2787]: E0213 15:27:07.024978 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.025185 kubelet[2787]: E0213 15:27:07.025133 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:07.025892 kubelet[2787]: E0213 15:27:07.025475 2787 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:07.026136 kubelet[2787]: E0213 15:27:07.026073 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" Feb 13 15:27:07.107877 containerd[1510]: time="2025-02-13T15:27:07.107748826Z" level=error msg="Failed to destroy network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.110625 containerd[1510]: time="2025-02-13T15:27:07.110539701Z" level=error msg="encountered an error cleaning up failed sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.112478 containerd[1510]: time="2025-02-13T15:27:07.112317084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.113101 kubelet[2787]: E0213 15:27:07.112719 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.113101 kubelet[2787]: E0213 15:27:07.112777 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:07.113101 kubelet[2787]: E0213 15:27:07.112794 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:07.113239 kubelet[2787]: E0213 15:27:07.112829 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:07.117069 containerd[1510]: time="2025-02-13T15:27:07.116845821Z" level=error msg="Failed to destroy network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.119795 containerd[1510]: time="2025-02-13T15:27:07.119209010Z" level=error msg="encountered an error cleaning up failed sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.119795 containerd[1510]: time="2025-02-13T15:27:07.119284371Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.119967 kubelet[2787]: E0213 15:27:07.119470 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.119967 kubelet[2787]: E0213 15:27:07.119536 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:07.119967 kubelet[2787]: E0213 15:27:07.119557 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:07.120109 kubelet[2787]: E0213 15:27:07.119592 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:07.128539 containerd[1510]: time="2025-02-13T15:27:07.128318325Z" level=error msg="Failed to destroy network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.129280 containerd[1510]: time="2025-02-13T15:27:07.129183736Z" level=error msg="encountered an error cleaning up failed sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.129280 containerd[1510]: time="2025-02-13T15:27:07.129264657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.130660 kubelet[2787]: E0213 15:27:07.129501 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.130660 kubelet[2787]: E0213 15:27:07.129554 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:07.130660 kubelet[2787]: E0213 15:27:07.129577 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:07.130807 kubelet[2787]: E0213 15:27:07.129614 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:07.133559 containerd[1510]: time="2025-02-13T15:27:07.133451989Z" level=error msg="Failed to destroy network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.134610 containerd[1510]: time="2025-02-13T15:27:07.134531483Z" level=error msg="encountered an error cleaning up failed sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.134687 containerd[1510]: time="2025-02-13T15:27:07.134610044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.135167 kubelet[2787]: E0213 15:27:07.134808 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:07.135167 kubelet[2787]: E0213 15:27:07.134858 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:07.135167 kubelet[2787]: E0213 15:27:07.134882 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:07.135246 kubelet[2787]: E0213 15:27:07.134949 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" Feb 13 15:27:07.459518 systemd[1]: run-netns-cni\x2db260678d\x2d7dfd\x2dffa1\x2d2fac\x2d868019aa8e07.mount: Deactivated successfully. Feb 13 15:27:07.459784 systemd[1]: run-netns-cni\x2d65a46b3a\x2d72ce\x2d747e\x2dc509\x2dd1bc17b76129.mount: Deactivated successfully. Feb 13 15:27:07.842451 kubelet[2787]: I0213 15:27:07.842182 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343" Feb 13 15:27:07.844300 containerd[1510]: time="2025-02-13T15:27:07.844257476Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" Feb 13 15:27:07.845298 containerd[1510]: time="2025-02-13T15:27:07.844984925Z" level=info msg="Ensure that sandbox ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343 in task-service has been cleanup successfully" Feb 13 15:27:07.847190 systemd[1]: run-netns-cni\x2d82285cb5\x2d1c61\x2d87a9\x2d9926\x2da2d3df6851a0.mount: Deactivated successfully. 
Feb 13 15:27:07.849480 containerd[1510]: time="2025-02-13T15:27:07.849441621Z" level=info msg="TearDown network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" successfully" Feb 13 15:27:07.849480 containerd[1510]: time="2025-02-13T15:27:07.849475021Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" returns successfully" Feb 13 15:27:07.850546 containerd[1510]: time="2025-02-13T15:27:07.850480674Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:07.851324 containerd[1510]: time="2025-02-13T15:27:07.850872319Z" level=info msg="TearDown network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" successfully" Feb 13 15:27:07.851324 containerd[1510]: time="2025-02-13T15:27:07.851269644Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" returns successfully" Feb 13 15:27:07.851838 containerd[1510]: time="2025-02-13T15:27:07.851641408Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:07.851838 containerd[1510]: time="2025-02-13T15:27:07.851717529Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:07.851838 containerd[1510]: time="2025-02-13T15:27:07.851727089Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:07.852991 containerd[1510]: time="2025-02-13T15:27:07.852938585Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:07.853426 containerd[1510]: time="2025-02-13T15:27:07.853328949Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:07.853426 containerd[1510]: time="2025-02-13T15:27:07.853347190Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:07.854381 containerd[1510]: time="2025-02-13T15:27:07.854260681Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:07.854381 containerd[1510]: time="2025-02-13T15:27:07.854353002Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:07.854381 containerd[1510]: time="2025-02-13T15:27:07.854362722Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:07.855084 kubelet[2787]: I0213 15:27:07.854998 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37" Feb 13 15:27:07.856360 containerd[1510]: time="2025-02-13T15:27:07.856094344Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" Feb 13 15:27:07.857888 containerd[1510]: time="2025-02-13T15:27:07.857829686Z" level=info msg="Ensure that sandbox 797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37 in task-service has been cleanup successfully" Feb 13 15:27:07.860111 containerd[1510]: time="2025-02-13T15:27:07.860025834Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:5,}" Feb 13 15:27:07.862124 containerd[1510]: time="2025-02-13T15:27:07.860249996Z" level=info msg="TearDown network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" successfully" Feb 13 15:27:07.862124 containerd[1510]: time="2025-02-13T15:27:07.860277557Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" returns successfully" Feb 13 15:27:07.861450 systemd[1]: run-netns-cni\x2da9e48bfa\x2db68a\x2dc3c4\x2dbe85\x2d6d0246821da2.mount: Deactivated successfully. Feb 13 15:27:07.862899 containerd[1510]: time="2025-02-13T15:27:07.862749628Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:07.862899 containerd[1510]: time="2025-02-13T15:27:07.862858789Z" level=info msg="TearDown network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" successfully" Feb 13 15:27:07.862899 containerd[1510]: time="2025-02-13T15:27:07.862885469Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" returns successfully" Feb 13 15:27:07.865680 containerd[1510]: time="2025-02-13T15:27:07.865640664Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:07.866247 containerd[1510]: time="2025-02-13T15:27:07.866209911Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:07.867024 containerd[1510]: time="2025-02-13T15:27:07.866487795Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:07.869358 containerd[1510]: time="2025-02-13T15:27:07.869298550Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:07.871064 containerd[1510]: time="2025-02-13T15:27:07.870727208Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:07.871064 containerd[1510]: time="2025-02-13T15:27:07.870763248Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:07.873386 containerd[1510]: time="2025-02-13T15:27:07.872981876Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:07.873957 containerd[1510]: time="2025-02-13T15:27:07.873516803Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:07.873957 containerd[1510]: time="2025-02-13T15:27:07.873532123Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:07.875323 containerd[1510]: time="2025-02-13T15:27:07.875286185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:5,}" Feb 13 15:27:07.875965 kubelet[2787]: I0213 15:27:07.875940 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9" Feb 13 15:27:07.878479 
containerd[1510]: time="2025-02-13T15:27:07.878326943Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" Feb 13 15:27:07.878593 containerd[1510]: time="2025-02-13T15:27:07.878508186Z" level=info msg="Ensure that sandbox 9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9 in task-service has been cleanup successfully" Feb 13 15:27:07.878915 containerd[1510]: time="2025-02-13T15:27:07.878724708Z" level=info msg="TearDown network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" successfully" Feb 13 15:27:07.878915 containerd[1510]: time="2025-02-13T15:27:07.878742989Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" returns successfully" Feb 13 15:27:07.883324 containerd[1510]: time="2025-02-13T15:27:07.882796200Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:07.884173 systemd[1]: run-netns-cni\x2dc3ffd976\x2d62d6\x2d273a\x2da55b\x2da843d7ed7fd0.mount: Deactivated successfully. Feb 13 15:27:07.889534 containerd[1510]: time="2025-02-13T15:27:07.889418283Z" level=info msg="TearDown network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" successfully" Feb 13 15:27:07.890451 containerd[1510]: time="2025-02-13T15:27:07.889762887Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" returns successfully" Feb 13 15:27:07.894179 containerd[1510]: time="2025-02-13T15:27:07.894139702Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:07.894277 containerd[1510]: time="2025-02-13T15:27:07.894246863Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:07.894277 containerd[1510]: time="2025-02-13T15:27:07.894257503Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:07.896922 containerd[1510]: time="2025-02-13T15:27:07.896637413Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:07.896922 containerd[1510]: time="2025-02-13T15:27:07.896748255Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:07.896922 containerd[1510]: time="2025-02-13T15:27:07.896759015Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:07.897109 kubelet[2787]: I0213 15:27:07.897090 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab" Feb 13 15:27:07.898391 containerd[1510]: time="2025-02-13T15:27:07.898342435Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:07.900726 containerd[1510]: time="2025-02-13T15:27:07.900690784Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:07.903310 containerd[1510]: time="2025-02-13T15:27:07.903276937Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 
15:27:07.906593 containerd[1510]: time="2025-02-13T15:27:07.906432296Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" Feb 13 15:27:07.907347 containerd[1510]: time="2025-02-13T15:27:07.907133465Z" level=info msg="Ensure that sandbox d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab in task-service has been cleanup successfully" Feb 13 15:27:07.908250 containerd[1510]: time="2025-02-13T15:27:07.908195599Z" level=info msg="TearDown network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" successfully" Feb 13 15:27:07.908250 containerd[1510]: time="2025-02-13T15:27:07.908216639Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" returns successfully" Feb 13 15:27:07.911744 containerd[1510]: time="2025-02-13T15:27:07.911707603Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:07.912353 containerd[1510]: time="2025-02-13T15:27:07.912083647Z" level=info msg="TearDown network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" successfully" Feb 13 15:27:07.912353 containerd[1510]: time="2025-02-13T15:27:07.912115648Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" returns successfully" Feb 13 15:27:07.912353 containerd[1510]: time="2025-02-13T15:27:07.912216089Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:07.912902 containerd[1510]: time="2025-02-13T15:27:07.912557173Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:07.912902 containerd[1510]: time="2025-02-13T15:27:07.912603974Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.913363263Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.913403544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:6,}" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.913437424Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.913446744Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.914130433Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.914403516Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.914416757Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.915064845Z" level=info msg="StopPodSandbox for 
\"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.915140366Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:07.917610 containerd[1510]: time="2025-02-13T15:27:07.915149566Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:07.917913 kubelet[2787]: I0213 15:27:07.915464 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38" Feb 13 15:27:07.922820 containerd[1510]: time="2025-02-13T15:27:07.922124893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:5,}" Feb 13 15:27:07.922820 containerd[1510]: time="2025-02-13T15:27:07.922376497Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" Feb 13 15:27:07.922820 containerd[1510]: time="2025-02-13T15:27:07.922522458Z" level=info msg="Ensure that sandbox adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38 in task-service has been cleanup successfully" Feb 13 15:27:07.927065 containerd[1510]: time="2025-02-13T15:27:07.927021275Z" level=info msg="TearDown network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" successfully" Feb 13 15:27:07.927065 containerd[1510]: time="2025-02-13T15:27:07.927052075Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" returns successfully" Feb 13 15:27:07.927856 containerd[1510]: time="2025-02-13T15:27:07.927830005Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:07.928794 containerd[1510]: time="2025-02-13T15:27:07.928674496Z" level=info msg="TearDown network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" successfully" Feb 13 15:27:07.928952 containerd[1510]: time="2025-02-13T15:27:07.928925499Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" returns successfully" Feb 13 15:27:07.930668 containerd[1510]: time="2025-02-13T15:27:07.930456918Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:07.930785 kubelet[2787]: I0213 15:27:07.930760 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519" Feb 13 15:27:07.931215 containerd[1510]: time="2025-02-13T15:27:07.931137727Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:07.931215 containerd[1510]: time="2025-02-13T15:27:07.931163847Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:07.932873 containerd[1510]: time="2025-02-13T15:27:07.932839388Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:07.933048 containerd[1510]: time="2025-02-13T15:27:07.932942229Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" 
successfully" Feb 13 15:27:07.933048 containerd[1510]: time="2025-02-13T15:27:07.932952949Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:07.933391 containerd[1510]: time="2025-02-13T15:27:07.933299514Z" level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" Feb 13 15:27:07.933536 containerd[1510]: time="2025-02-13T15:27:07.933447356Z" level=info msg="Ensure that sandbox 1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519 in task-service has been cleanup successfully" Feb 13 15:27:07.933944 containerd[1510]: time="2025-02-13T15:27:07.933896401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:27:07.934263 containerd[1510]: time="2025-02-13T15:27:07.934227285Z" level=info msg="TearDown network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" successfully" Feb 13 15:27:07.934263 containerd[1510]: time="2025-02-13T15:27:07.934247766Z" level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" returns successfully" Feb 13 15:27:07.937667 containerd[1510]: time="2025-02-13T15:27:07.937601728Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:07.937873 containerd[1510]: time="2025-02-13T15:27:07.937769930Z" level=info msg="TearDown network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" successfully" Feb 13 15:27:07.937873 containerd[1510]: time="2025-02-13T15:27:07.937782370Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" returns successfully" Feb 13 15:27:07.938450 containerd[1510]: time="2025-02-13T15:27:07.938425938Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:07.938648 containerd[1510]: time="2025-02-13T15:27:07.938629301Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:07.938712 containerd[1510]: time="2025-02-13T15:27:07.938698782Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:07.940001 containerd[1510]: time="2025-02-13T15:27:07.939681594Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:07.940184 containerd[1510]: time="2025-02-13T15:27:07.940165680Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:07.940245 containerd[1510]: time="2025-02-13T15:27:07.940232441Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:07.941357 containerd[1510]: time="2025-02-13T15:27:07.941295894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:27:08.039188 containerd[1510]: time="2025-02-13T15:27:08.039115792Z" level=error msg="Failed to destroy network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.041604 containerd[1510]: time="2025-02-13T15:27:08.041104376Z" level=error msg="encountered an error cleaning up failed sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.042320 containerd[1510]: time="2025-02-13T15:27:08.042229350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.042876 kubelet[2787]: E0213 15:27:08.042798 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.042999 kubelet[2787]: E0213 15:27:08.042894 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:08.042999 kubelet[2787]: E0213 15:27:08.042926 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" Feb 13 15:27:08.042999 kubelet[2787]: E0213 15:27:08.042968 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-788df79c7b-m4kpj_calico-system(b22d464a-68ec-4133-9687-c90c18294db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podUID="b22d464a-68ec-4133-9687-c90c18294db8" Feb 13 15:27:08.145625 containerd[1510]: 
time="2025-02-13T15:27:08.145468336Z" level=error msg="Failed to destroy network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.151975 containerd[1510]: time="2025-02-13T15:27:08.151762814Z" level=error msg="encountered an error cleaning up failed sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.151975 containerd[1510]: time="2025-02-13T15:27:08.151874575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.153752 kubelet[2787]: E0213 15:27:08.153698 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.153961 kubelet[2787]: E0213 15:27:08.153765 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:08.153961 kubelet[2787]: E0213 15:27:08.153788 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-77zlz" Feb 13 15:27:08.153961 kubelet[2787]: E0213 15:27:08.153834 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-77zlz_kube-system(6bd1757e-903b-41e7-a1bd-8c95ee35dbf5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-77zlz" 
podUID="6bd1757e-903b-41e7-a1bd-8c95ee35dbf5" Feb 13 15:27:08.183006 containerd[1510]: time="2025-02-13T15:27:08.182090266Z" level=error msg="Failed to destroy network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.183006 containerd[1510]: time="2025-02-13T15:27:08.182704233Z" level=error msg="encountered an error cleaning up failed sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.183006 containerd[1510]: time="2025-02-13T15:27:08.182772794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.183521 kubelet[2787]: E0213 15:27:08.183481 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.183611 kubelet[2787]: E0213 15:27:08.183539 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:08.183611 kubelet[2787]: E0213 15:27:08.183564 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" Feb 13 15:27:08.183660 kubelet[2787]: E0213 15:27:08.183601 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-n9z22_calico-apiserver(09d81451-756a-493d-9c79-aa5ef1100e0d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podUID="09d81451-756a-493d-9c79-aa5ef1100e0d" Feb 13 15:27:08.199237 containerd[1510]: time="2025-02-13T15:27:08.199085754Z" level=error msg="Failed to destroy network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.201030 containerd[1510]: time="2025-02-13T15:27:08.200812895Z" level=error msg="encountered an error cleaning up failed sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.201273 containerd[1510]: time="2025-02-13T15:27:08.200891496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.201472 kubelet[2787]: E0213 15:27:08.201396 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.201472 kubelet[2787]: E0213 15:27:08.201469 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:08.202079 kubelet[2787]: E0213 15:27:08.201489 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9kzwx" Feb 13 15:27:08.202079 kubelet[2787]: E0213 15:27:08.201535 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9kzwx_calico-system(0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9kzwx" podUID="0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b" Feb 13 15:27:08.211159 containerd[1510]: time="2025-02-13T15:27:08.211098702Z" level=error msg="Failed to destroy network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.211492 containerd[1510]: time="2025-02-13T15:27:08.211450386Z" level=error msg="Failed to destroy network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.211827 containerd[1510]: time="2025-02-13T15:27:08.211765910Z" level=error msg="encountered an error cleaning up failed sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.211870 containerd[1510]: time="2025-02-13T15:27:08.211846911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.212135 kubelet[2787]: E0213 15:27:08.212100 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.212201 kubelet[2787]: E0213 15:27:08.212156 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:08.212201 kubelet[2787]: E0213 15:27:08.212176 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" Feb 13 15:27:08.212253 kubelet[2787]: E0213 15:27:08.212217 2787 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7688c5dd9f-wlqt5_calico-apiserver(ea674312-9393-4b86-8bbb-6e3a7a4e2c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podUID="ea674312-9393-4b86-8bbb-6e3a7a4e2c23" Feb 13 15:27:08.213106 containerd[1510]: time="2025-02-13T15:27:08.212871043Z" level=error msg="encountered an error cleaning up failed sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.213798 containerd[1510]: time="2025-02-13T15:27:08.213673453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.214209 kubelet[2787]: E0213 15:27:08.213902 2787 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:27:08.214209 kubelet[2787]: E0213 15:27:08.214045 2787 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:08.214209 kubelet[2787]: E0213 15:27:08.214064 2787 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbdv" Feb 13 15:27:08.214466 kubelet[2787]: E0213 15:27:08.214104 2787 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-8cbdv_kube-system(400c597f-5d24-4504-a9fc-e7fa6fcc44df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podUID="400c597f-5d24-4504-a9fc-e7fa6fcc44df" Feb 13 15:27:08.216998 containerd[1510]: time="2025-02-13T15:27:08.216959974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:08.218828 containerd[1510]: time="2025-02-13T15:27:08.218558153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:27:08.220672 containerd[1510]: time="2025-02-13T15:27:08.219960570Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:08.222525 containerd[1510]: time="2025-02-13T15:27:08.222475401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:08.224421 containerd[1510]: time="2025-02-13T15:27:08.224376065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.608286142s" Feb 13 15:27:08.224490 containerd[1510]: time="2025-02-13T15:27:08.224427265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:27:08.232910 containerd[1510]: time="2025-02-13T15:27:08.232864649Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:27:08.251540 containerd[1510]: time="2025-02-13T15:27:08.251475877Z" level=info msg="CreateContainer within sandbox \"00ffefa689b9391f089b13613eee6dab447e23e9a397b5e52c62fb0fcd20269c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524\"" Feb 13 15:27:08.252452 containerd[1510]: time="2025-02-13T15:27:08.252385768Z" level=info msg="StartContainer for \"b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524\"" Feb 13 15:27:08.286162 systemd[1]: Started cri-containerd-b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524.scope - libcontainer container b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524. Feb 13 15:27:08.325290 containerd[1510]: time="2025-02-13T15:27:08.325224502Z" level=info msg="StartContainer for \"b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524\" returns successfully" Feb 13 15:27:08.447934 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:27:08.448177 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 15:27:08.462335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f-shm.mount: Deactivated successfully. Feb 13 15:27:08.462736 systemd[1]: run-netns-cni\x2d6236048b\x2d7752\x2d300b\x2dcee3\x2df3b698d6c986.mount: Deactivated successfully. Feb 13 15:27:08.462889 systemd[1]: run-netns-cni\x2dbb174c94\x2dd4ec\x2dc392\x2d3286\x2de0a7b109fcea.mount: Deactivated successfully. Feb 13 15:27:08.463299 systemd[1]: run-netns-cni\x2dce2cb0df\x2d7306\x2db0f0\x2d2a95\x2dc9d0a135d964.mount: Deactivated successfully. Feb 13 15:27:08.463444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777163758.mount: Deactivated successfully. Feb 13 15:27:08.939952 kubelet[2787]: I0213 15:27:08.939110 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4" Feb 13 15:27:08.941555 containerd[1510]: time="2025-02-13T15:27:08.940675413Z" level=info msg="StopPodSandbox for \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\"" Feb 13 15:27:08.941555 containerd[1510]: time="2025-02-13T15:27:08.940980336Z" level=info msg="Ensure that sandbox dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4 in task-service has been cleanup successfully" Feb 13 15:27:08.943635 containerd[1510]: time="2025-02-13T15:27:08.943597809Z" level=info msg="TearDown network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" successfully" Feb 13 15:27:08.943891 containerd[1510]: time="2025-02-13T15:27:08.943776531Z" level=info msg="StopPodSandbox for \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" returns successfully" Feb 13 15:27:08.945518 containerd[1510]: time="2025-02-13T15:27:08.945373470Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" Feb 13 15:27:08.945518 containerd[1510]: time="2025-02-13T15:27:08.945479912Z" level=info msg="TearDown network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" successfully" Feb 13 15:27:08.945518 containerd[1510]: time="2025-02-13T15:27:08.945490032Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" returns successfully" Feb 13 15:27:08.945682 systemd[1]: run-netns-cni\x2d7b0f3998\x2da26c\x2d8ee3\x2d0842\x2dc7a8aeb8ca69.mount: Deactivated successfully. 
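The run-netns-cni\x2d....mount units above are systemd's escaped spelling of the per-sandbox network namespace paths under /run/netns: in a unit name "-" stands for the path separator and a literal "-" is written as \x2d. The helper below decodes a unit name under that assumption (roughly the conversion systemd-escape --unescape --path performs); it is an illustrative sketch, not systemd code.

// unit_unescape.go: converts a systemd path-encoded mount unit name, as seen in
// the "run-netns-cni\x2d....mount: Deactivated successfully." lines above,
// back into a filesystem path.
package main

import (
    "fmt"
    "strconv"
    "strings"
)

func unescapeUnitPath(unit string) string {
    name := strings.TrimSuffix(unit, ".mount")
    var b strings.Builder
    b.WriteByte('/')
    for i := 0; i < len(name); i++ {
        switch {
        case name[i] == '-':
            b.WriteByte('/') // "-" is the path separator inside unit names
        case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
            if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
                b.WriteByte(byte(v)) // "\xNN" escapes a literal byte, e.g. \x2d is "-"
                i += 3
                continue
            }
            b.WriteByte(name[i])
        default:
            b.WriteByte(name[i])
        }
    }
    return b.String()
}

func main() {
    // One of the unit names from the log; prints /run/netns/cni-6236048b-7752-300b-cee3-f3b698d6c986
    fmt.Println(unescapeUnitPath(`run-netns-cni\x2d6236048b\x2d7752\x2d300b\x2dcee3\x2df3b698d6c986.mount`))
}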
Feb 13 15:27:08.948342 containerd[1510]: time="2025-02-13T15:27:08.948137144Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:08.948342 containerd[1510]: time="2025-02-13T15:27:08.948266346Z" level=info msg="TearDown network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" successfully" Feb 13 15:27:08.948342 containerd[1510]: time="2025-02-13T15:27:08.948277106Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" returns successfully" Feb 13 15:27:08.949008 containerd[1510]: time="2025-02-13T15:27:08.948637190Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:08.949008 containerd[1510]: time="2025-02-13T15:27:08.948730671Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:08.949008 containerd[1510]: time="2025-02-13T15:27:08.948741072Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:08.949402 containerd[1510]: time="2025-02-13T15:27:08.949378999Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:08.949651 containerd[1510]: time="2025-02-13T15:27:08.949543321Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:08.949651 containerd[1510]: time="2025-02-13T15:27:08.949564402Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:08.950624 containerd[1510]: time="2025-02-13T15:27:08.950236730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:27:08.950704 kubelet[2787]: I0213 15:27:08.950677 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7" Feb 13 15:27:08.953641 containerd[1510]: time="2025-02-13T15:27:08.953596011Z" level=info msg="StopPodSandbox for \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\"" Feb 13 15:27:08.954694 containerd[1510]: time="2025-02-13T15:27:08.954427381Z" level=info msg="Ensure that sandbox ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7 in task-service has been cleanup successfully" Feb 13 15:27:08.955084 containerd[1510]: time="2025-02-13T15:27:08.954934748Z" level=info msg="TearDown network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" successfully" Feb 13 15:27:08.955084 containerd[1510]: time="2025-02-13T15:27:08.954964428Z" level=info msg="StopPodSandbox for \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" returns successfully" Feb 13 15:27:08.958486 containerd[1510]: time="2025-02-13T15:27:08.958366870Z" level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" Feb 13 15:27:08.958740 containerd[1510]: time="2025-02-13T15:27:08.958467111Z" level=info msg="TearDown network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" successfully" Feb 13 15:27:08.958740 containerd[1510]: time="2025-02-13T15:27:08.958614273Z" 
level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" returns successfully" Feb 13 15:27:08.959475 systemd[1]: run-netns-cni\x2d31be5127\x2d371b\x2db09d\x2d6130\x2db14977b0189d.mount: Deactivated successfully. Feb 13 15:27:08.961121 containerd[1510]: time="2025-02-13T15:27:08.960950221Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:08.961353 containerd[1510]: time="2025-02-13T15:27:08.961308946Z" level=info msg="TearDown network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" successfully" Feb 13 15:27:08.961540 containerd[1510]: time="2025-02-13T15:27:08.961432627Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" returns successfully" Feb 13 15:27:08.976647 containerd[1510]: time="2025-02-13T15:27:08.976604613Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:08.977229 containerd[1510]: time="2025-02-13T15:27:08.977202101Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:08.977229 containerd[1510]: time="2025-02-13T15:27:08.977223861Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:08.978506 kubelet[2787]: I0213 15:27:08.978435 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f" Feb 13 15:27:08.979240 containerd[1510]: time="2025-02-13T15:27:08.978300034Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:08.979952 containerd[1510]: time="2025-02-13T15:27:08.979180325Z" level=info msg="StopPodSandbox for \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\"" Feb 13 15:27:08.981559 containerd[1510]: time="2025-02-13T15:27:08.980995667Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:08.981651 containerd[1510]: time="2025-02-13T15:27:08.981555474Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:08.982619 containerd[1510]: time="2025-02-13T15:27:08.982185162Z" level=info msg="Ensure that sandbox 5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f in task-service has been cleanup successfully" Feb 13 15:27:08.985634 containerd[1510]: time="2025-02-13T15:27:08.985595844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:27:08.988670 kubelet[2787]: I0213 15:27:08.986977 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35" Feb 13 15:27:08.988435 systemd[1]: run-netns-cni\x2dcd927f00\x2d11aa\x2d0c3a\x2dc5c3\x2d4c7e9a660598.mount: Deactivated successfully. 
Feb 13 15:27:08.989748 containerd[1510]: time="2025-02-13T15:27:08.989160648Z" level=info msg="TearDown network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" successfully" Feb 13 15:27:08.990487 containerd[1510]: time="2025-02-13T15:27:08.990457823Z" level=info msg="StopPodSandbox for \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" returns successfully" Feb 13 15:27:08.991132 containerd[1510]: time="2025-02-13T15:27:08.987875112Z" level=info msg="StopPodSandbox for \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\"" Feb 13 15:27:08.991557 containerd[1510]: time="2025-02-13T15:27:08.991533837Z" level=info msg="Ensure that sandbox ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35 in task-service has been cleanup successfully" Feb 13 15:27:08.992254 containerd[1510]: time="2025-02-13T15:27:08.992223085Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" Feb 13 15:27:08.992333 containerd[1510]: time="2025-02-13T15:27:08.992316726Z" level=info msg="TearDown network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" successfully" Feb 13 15:27:08.992333 containerd[1510]: time="2025-02-13T15:27:08.992329606Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" returns successfully" Feb 13 15:27:08.994400 containerd[1510]: time="2025-02-13T15:27:08.994365991Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:08.994484 containerd[1510]: time="2025-02-13T15:27:08.994466113Z" level=info msg="TearDown network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" successfully" Feb 13 15:27:08.994484 containerd[1510]: time="2025-02-13T15:27:08.994475873Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" returns successfully" Feb 13 15:27:08.996247 containerd[1510]: time="2025-02-13T15:27:08.996194134Z" level=info msg="TearDown network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" successfully" Feb 13 15:27:08.996247 containerd[1510]: time="2025-02-13T15:27:08.996222494Z" level=info msg="StopPodSandbox for \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" returns successfully" Feb 13 15:27:08.996712 containerd[1510]: time="2025-02-13T15:27:08.996550298Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:08.997254 containerd[1510]: time="2025-02-13T15:27:08.997176506Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:08.997254 containerd[1510]: time="2025-02-13T15:27:08.997201266Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:08.998509 containerd[1510]: time="2025-02-13T15:27:08.998405161Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:08.999619 containerd[1510]: time="2025-02-13T15:27:08.999208851Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" Feb 13 15:27:08.999619 containerd[1510]: time="2025-02-13T15:27:08.999317812Z" level=info msg="TearDown network for sandbox 
\"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" successfully" Feb 13 15:27:08.999619 containerd[1510]: time="2025-02-13T15:27:08.999328492Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" returns successfully" Feb 13 15:27:08.999925 containerd[1510]: time="2025-02-13T15:27:08.999727377Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:08.999925 containerd[1510]: time="2025-02-13T15:27:08.999827218Z" level=info msg="TearDown network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" successfully" Feb 13 15:27:08.999925 containerd[1510]: time="2025-02-13T15:27:08.999837659Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" returns successfully" Feb 13 15:27:09.000547 containerd[1510]: time="2025-02-13T15:27:09.000402345Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:09.000666 containerd[1510]: time="2025-02-13T15:27:09.000634068Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:09.000666 containerd[1510]: time="2025-02-13T15:27:09.000656869Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:09.000777 containerd[1510]: time="2025-02-13T15:27:09.000755350Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:09.000829 containerd[1510]: time="2025-02-13T15:27:09.000818511Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:09.002000 containerd[1510]: time="2025-02-13T15:27:09.001971124Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:09.002272 containerd[1510]: time="2025-02-13T15:27:09.002186487Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:09.002272 containerd[1510]: time="2025-02-13T15:27:09.002205687Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:09.003618 containerd[1510]: time="2025-02-13T15:27:09.003305420Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:09.003618 containerd[1510]: time="2025-02-13T15:27:09.003414502Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:09.003618 containerd[1510]: time="2025-02-13T15:27:09.003424982Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:09.004407 containerd[1510]: time="2025-02-13T15:27:09.004357353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:6,}" Feb 13 15:27:09.004760 containerd[1510]: time="2025-02-13T15:27:09.004528635Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:09.005060 
containerd[1510]: time="2025-02-13T15:27:09.005031441Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:09.005279 containerd[1510]: time="2025-02-13T15:27:09.005136402Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:09.006344 containerd[1510]: time="2025-02-13T15:27:09.006129494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:6,}" Feb 13 15:27:09.010228 kubelet[2787]: I0213 15:27:09.009695 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779" Feb 13 15:27:09.012937 containerd[1510]: time="2025-02-13T15:27:09.011803282Z" level=info msg="StopPodSandbox for \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\"" Feb 13 15:27:09.020713 containerd[1510]: time="2025-02-13T15:27:09.020669029Z" level=info msg="Ensure that sandbox 96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779 in task-service has been cleanup successfully" Feb 13 15:27:09.024095 containerd[1510]: time="2025-02-13T15:27:09.024003389Z" level=info msg="TearDown network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" successfully" Feb 13 15:27:09.024095 containerd[1510]: time="2025-02-13T15:27:09.024085950Z" level=info msg="StopPodSandbox for \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" returns successfully" Feb 13 15:27:09.027711 containerd[1510]: time="2025-02-13T15:27:09.027663672Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" Feb 13 15:27:09.027871 containerd[1510]: time="2025-02-13T15:27:09.027768154Z" level=info msg="TearDown network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" successfully" Feb 13 15:27:09.027871 containerd[1510]: time="2025-02-13T15:27:09.027779354Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" returns successfully" Feb 13 15:27:09.031777 containerd[1510]: time="2025-02-13T15:27:09.031634800Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:09.031883 containerd[1510]: time="2025-02-13T15:27:09.031765522Z" level=info msg="TearDown network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" successfully" Feb 13 15:27:09.031883 containerd[1510]: time="2025-02-13T15:27:09.031799242Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" returns successfully" Feb 13 15:27:09.033416 containerd[1510]: time="2025-02-13T15:27:09.033099538Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:09.033416 containerd[1510]: time="2025-02-13T15:27:09.033214259Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:09.033416 containerd[1510]: time="2025-02-13T15:27:09.033225259Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.034399233Z" level=info 
msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.034505634Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.034515755Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.034866799Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.035162602Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.035177362Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.035660808Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.035740689Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:09.035756 containerd[1510]: time="2025-02-13T15:27:09.035749929Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:09.036497 kubelet[2787]: I0213 15:27:09.036454 2787 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8" Feb 13 15:27:09.038632 containerd[1510]: time="2025-02-13T15:27:09.038577563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:7,}" Feb 13 15:27:09.038853 containerd[1510]: time="2025-02-13T15:27:09.038822286Z" level=info msg="StopPodSandbox for \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\"" Feb 13 15:27:09.039888 containerd[1510]: time="2025-02-13T15:27:09.039656536Z" level=info msg="Ensure that sandbox 96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8 in task-service has been cleanup successfully" Feb 13 15:27:09.042413 containerd[1510]: time="2025-02-13T15:27:09.042377529Z" level=info msg="TearDown network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" successfully" Feb 13 15:27:09.042568 containerd[1510]: time="2025-02-13T15:27:09.042427049Z" level=info msg="StopPodSandbox for \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" returns successfully" Feb 13 15:27:09.048270 containerd[1510]: time="2025-02-13T15:27:09.048227919Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" Feb 13 15:27:09.048409 containerd[1510]: time="2025-02-13T15:27:09.048372161Z" level=info msg="TearDown network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" successfully" Feb 13 15:27:09.048409 containerd[1510]: time="2025-02-13T15:27:09.048385321Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" 
returns successfully" Feb 13 15:27:09.054622 containerd[1510]: time="2025-02-13T15:27:09.054573035Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:09.054830 containerd[1510]: time="2025-02-13T15:27:09.054712117Z" level=info msg="TearDown network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" successfully" Feb 13 15:27:09.054830 containerd[1510]: time="2025-02-13T15:27:09.054733437Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" returns successfully" Feb 13 15:27:09.055918 containerd[1510]: time="2025-02-13T15:27:09.055651288Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:09.055918 containerd[1510]: time="2025-02-13T15:27:09.055789210Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:09.055918 containerd[1510]: time="2025-02-13T15:27:09.055802170Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:09.056678 containerd[1510]: time="2025-02-13T15:27:09.056458338Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:09.058923 containerd[1510]: time="2025-02-13T15:27:09.057700873Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:09.058923 containerd[1510]: time="2025-02-13T15:27:09.057735633Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:09.060128 containerd[1510]: time="2025-02-13T15:27:09.060080741Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:09.061093 containerd[1510]: time="2025-02-13T15:27:09.060541827Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:09.061093 containerd[1510]: time="2025-02-13T15:27:09.061086193Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:09.065609 containerd[1510]: time="2025-02-13T15:27:09.065541007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:6,}" Feb 13 15:27:09.380690 systemd-networkd[1409]: cali495be1a78f5: Link UP Feb 13 15:27:09.387404 systemd-networkd[1409]: cali495be1a78f5: Gained carrier Feb 13 15:27:09.415606 kubelet[2787]: I0213 15:27:09.414134 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m87gv" podStartSLOduration=2.593622585 podStartE2EDuration="17.414117265s" podCreationTimestamp="2025-02-13 15:26:52 +0000 UTC" firstStartedPulling="2025-02-13 15:26:53.404734635 +0000 UTC m=+23.074866116" lastFinishedPulling="2025-02-13 15:27:08.225229355 +0000 UTC m=+37.895360796" observedRunningTime="2025-02-13 15:27:09.029064689 +0000 UTC m=+38.699196130" watchObservedRunningTime="2025-02-13 15:27:09.414117265 +0000 UTC m=+39.084248706" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.108 [INFO][4655] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 
15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.141 [INFO][4655] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0 calico-apiserver-7688c5dd9f- calico-apiserver 09d81451-756a-493d-9c79-aa5ef1100e0d 719 0 2025-02-13 15:26:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7688c5dd9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 calico-apiserver-7688c5dd9f-n9z22 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali495be1a78f5 [] []}} ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.143 [INFO][4655] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.245 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" HandleID="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.282 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" HandleID="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031f6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-0-1-9-12db063e25", "pod":"calico-apiserver-7688c5dd9f-n9z22", "timestamp":"2025-02-13 15:27:09.245241521 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.282 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.283 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
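The WorkloadEndpoint keys in these CNI lines encode node, pod and interface as <node>-k8s-<pod>-<iface>, with any literal "-" inside a component doubled, which is why the hostname ci-4230-0-1-9-12db063e25 shows up as ci--4230--0--1--9--12db063e25 in the endpoint name. The decoder below is written under that assumption; it is an illustrative sketch, not Calico's parser.

// wep_name.go: splits a Calico WorkloadEndpoint name of the form
// <node>-k8s-<pod>-<iface>, where a literal "-" inside a component is
// written as "--" (as in the endpoint keys logged above).
package main

import (
    "fmt"
    "strings"
)

func splitWorkloadEndpoint(name string) (node, pod, iface string) {
    const marker = "\x00" // stand-in for escaped "--" while we split on single "-"
    tmp := strings.ReplaceAll(name, "--", marker)

    parts := strings.SplitN(tmp, "-k8s-", 2)
    if len(parts) != 2 {
        return name, "", "" // not a k8s workload endpoint name
    }
    node = strings.ReplaceAll(parts[0], marker, "-")

    rest := parts[1]
    cut := strings.LastIndex(rest, "-") // single "-" separating pod from interface
    pod = strings.ReplaceAll(rest[:cut], marker, "-")
    iface = rest[cut+1:]
    return node, pod, iface
}

func main() {
    node, pod, iface := splitWorkloadEndpoint("ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0")
    // Prints: ci-4230-0-1-9-12db063e25 calico-apiserver-7688c5dd9f-n9z22 eth0
    fmt.Println(node, pod, iface)
}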
Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.283 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.286 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.309 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.322 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.325 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.332 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.332 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.334 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.343 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.353 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.353 [INFO][4712] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.353 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
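The IPAM trace above gives this node an affine /26 block, 192.168.35.0/26, and claims the first free address in it, 192.168.35.1, for calico-apiserver-7688c5dd9f-n9z22. The snippet below double-checks that block arithmetic with net/netip; the figures come from the log and the code is illustrative only, not Calico's IPAM.

// ipam_block.go: verifies the block math behind the IPAM lines above:
// 192.168.35.0/26 holds 64 addresses and 192.168.35.1 falls inside it.
package main

import (
    "fmt"
    "net/netip"
)

func main() {
    block := netip.MustParsePrefix("192.168.35.0/26") // affine block from the log
    claimed := netip.MustParseAddr("192.168.35.1")    // address claimed for the pod

    hostBits := 32 - block.Bits()
    size := 1 << hostBits // 2^6 = 64 addresses in a /26

    last := block.Addr()
    for i := 0; i < size-1; i++ {
        last = last.Next()
    }

    fmt.Printf("block %s: %d addresses, %s - %s\n", block, size, block.Addr(), last)
    fmt.Printf("claimed %s inside block: %v\n", claimed, block.Contains(claimed))
}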
Feb 13 15:27:09.425179 containerd[1510]: 2025-02-13 15:27:09.355 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" HandleID="k8s-pod-network.4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.367 [INFO][4655] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0", GenerateName:"calico-apiserver-7688c5dd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"09d81451-756a-493d-9c79-aa5ef1100e0d", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688c5dd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"calico-apiserver-7688c5dd9f-n9z22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali495be1a78f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.367 [INFO][4655] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.1/32] ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.368 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali495be1a78f5 ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.389 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.390 [INFO][4655] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0", GenerateName:"calico-apiserver-7688c5dd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"09d81451-756a-493d-9c79-aa5ef1100e0d", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688c5dd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf", Pod:"calico-apiserver-7688c5dd9f-n9z22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali495be1a78f5", MAC:"02:2e:b8:89:ea:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.425801 containerd[1510]: 2025-02-13 15:27:09.417 [INFO][4655] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-n9z22" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--n9z22-eth0" Feb 13 15:27:09.467651 systemd[1]: run-netns-cni\x2d69db2eed\x2d4a3e\x2da660\x2dd187\x2d790b824bd5aa.mount: Deactivated successfully. Feb 13 15:27:09.467929 systemd[1]: run-netns-cni\x2d154edfc2\x2d6d46\x2d784d\x2dd895\x2d18e03601e2f4.mount: Deactivated successfully. Feb 13 15:27:09.467983 systemd[1]: run-netns-cni\x2deba068d5\x2d80cc\x2d01de\x2d3b60\x2dd56dea27b898.mount: Deactivated successfully. Feb 13 15:27:09.480898 systemd-networkd[1409]: calid76ff6c4707: Link UP Feb 13 15:27:09.481393 systemd-networkd[1409]: calid76ff6c4707: Gained carrier Feb 13 15:27:09.491401 containerd[1510]: time="2025-02-13T15:27:09.490448061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.491401 containerd[1510]: time="2025-02-13T15:27:09.490591662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.491401 containerd[1510]: time="2025-02-13T15:27:09.490611302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.491401 containerd[1510]: time="2025-02-13T15:27:09.490716704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.519114 systemd[1]: run-containerd-runc-k8s.io-4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf-runc.2MHAFR.mount: Deactivated successfully. Feb 13 15:27:09.532175 systemd[1]: Started cri-containerd-4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf.scope - libcontainer container 4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf. Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.039 [INFO][4643] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.076 [INFO][4643] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0 calico-apiserver-7688c5dd9f- calico-apiserver ea674312-9393-4b86-8bbb-6e3a7a4e2c23 718 0 2025-02-13 15:26:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7688c5dd9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 calico-apiserver-7688c5dd9f-wlqt5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid76ff6c4707 [] []}} ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.077 [INFO][4643] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.262 [INFO][4673] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" HandleID="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.311 [INFO][4673] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" HandleID="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-0-1-9-12db063e25", "pod":"calico-apiserver-7688c5dd9f-wlqt5", "timestamp":"2025-02-13 15:27:09.260722826 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.533115 
containerd[1510]: 2025-02-13 15:27:09.311 [INFO][4673] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.355 [INFO][4673] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.356 [INFO][4673] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.361 [INFO][4673] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.369 [INFO][4673] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.398 [INFO][4673] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.416 [INFO][4673] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.423 [INFO][4673] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.424 [INFO][4673] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.434 [INFO][4673] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411 Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.446 [INFO][4673] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.466 [INFO][4673] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.467 [INFO][4673] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.467 [INFO][4673] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:09.533115 containerd[1510]: 2025-02-13 15:27:09.470 [INFO][4673] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" HandleID="k8s-pod-network.1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.476 [INFO][4643] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0", GenerateName:"calico-apiserver-7688c5dd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea674312-9393-4b86-8bbb-6e3a7a4e2c23", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688c5dd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"calico-apiserver-7688c5dd9f-wlqt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid76ff6c4707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.477 [INFO][4643] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.2/32] ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.477 [INFO][4643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid76ff6c4707 ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.492 [INFO][4643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.493 [INFO][4643] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0", GenerateName:"calico-apiserver-7688c5dd9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea674312-9393-4b86-8bbb-6e3a7a4e2c23", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7688c5dd9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411", Pod:"calico-apiserver-7688c5dd9f-wlqt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid76ff6c4707", MAC:"2e:68:31:db:15:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.534131 containerd[1510]: 2025-02-13 15:27:09.521 [INFO][4643] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411" Namespace="calico-apiserver" Pod="calico-apiserver-7688c5dd9f-wlqt5" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--apiserver--7688c5dd9f--wlqt5-eth0" Feb 13 15:27:09.574097 systemd-networkd[1409]: calie3dfb93f797: Link UP Feb 13 15:27:09.577380 systemd-networkd[1409]: calie3dfb93f797: Gained carrier Feb 13 15:27:09.599746 containerd[1510]: time="2025-02-13T15:27:09.599269565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.599746 containerd[1510]: time="2025-02-13T15:27:09.599332166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.599746 containerd[1510]: time="2025-02-13T15:27:09.599347966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.599746 containerd[1510]: time="2025-02-13T15:27:09.599429087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.170 [INFO][4692] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.200 [INFO][4692] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0 csi-node-driver- calico-system 0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b 609 0 2025-02-13 15:26:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 csi-node-driver-9kzwx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie3dfb93f797 [] []}} ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.200 [INFO][4692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.275 [INFO][4723] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" HandleID="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Workload="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.319 [INFO][4723] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" HandleID="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Workload="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a97f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-0-1-9-12db063e25", "pod":"csi-node-driver-9kzwx", "timestamp":"2025-02-13 15:27:09.275710606 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.319 [INFO][4723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.468 [INFO][4723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.468 [INFO][4723] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.472 [INFO][4723] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.483 [INFO][4723] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.502 [INFO][4723] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.508 [INFO][4723] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.520 [INFO][4723] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.521 [INFO][4723] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.529 [INFO][4723] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.538 [INFO][4723] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.555 [INFO][4723] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 handle="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.556 [INFO][4723] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.556 [INFO][4723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:09.613105 containerd[1510]: 2025-02-13 15:27:09.556 [INFO][4723] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" HandleID="k8s-pod-network.01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Workload="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.563 [INFO][4692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"csi-node-driver-9kzwx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3dfb93f797", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.564 [INFO][4692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.3/32] ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.564 [INFO][4692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3dfb93f797 ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.576 [INFO][4692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.581 [INFO][4692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b", ResourceVersion:"609", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b", Pod:"csi-node-driver-9kzwx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie3dfb93f797", MAC:"4a:ca:8a:0d:a3:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.613666 containerd[1510]: 2025-02-13 15:27:09.604 [INFO][4692] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b" Namespace="calico-system" Pod="csi-node-driver-9kzwx" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-csi--node--driver--9kzwx-eth0" Feb 13 15:27:09.650230 systemd[1]: Started cri-containerd-1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411.scope - libcontainer container 1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411. 
Feb 13 15:27:09.658530 containerd[1510]: time="2025-02-13T15:27:09.657817067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-n9z22,Uid:09d81451-756a-493d-9c79-aa5ef1100e0d,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf\"" Feb 13 15:27:09.662938 systemd-networkd[1409]: cali6b94de3de8b: Link UP Feb 13 15:27:09.663239 systemd-networkd[1409]: cali6b94de3de8b: Gained carrier Feb 13 15:27:09.667225 containerd[1510]: time="2025-02-13T15:27:09.667188379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.188 [INFO][4664] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.233 [INFO][4664] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0 calico-kube-controllers-788df79c7b- calico-system b22d464a-68ec-4133-9687-c90c18294db8 720 0 2025-02-13 15:26:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:788df79c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 calico-kube-controllers-788df79c7b-m4kpj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b94de3de8b [] []}} ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.233 [INFO][4664] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.384 [INFO][4735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" HandleID="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.424 [INFO][4735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" HandleID="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316d20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-0-1-9-12db063e25", "pod":"calico-kube-controllers-788df79c7b-m4kpj", "timestamp":"2025-02-13 15:27:09.383804182 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.424 [INFO][4735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.556 [INFO][4735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.556 [INFO][4735] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.558 [INFO][4735] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.567 [INFO][4735] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.582 [INFO][4735] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.588 [INFO][4735] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.603 [INFO][4735] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.603 [INFO][4735] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.615 [INFO][4735] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.624 [INFO][4735] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.636 [INFO][4735] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 handle="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.636 [INFO][4735] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.636 [INFO][4735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:09.691127 containerd[1510]: 2025-02-13 15:27:09.636 [INFO][4735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" HandleID="k8s-pod-network.930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Workload="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.647 [INFO][4664] cni-plugin/k8s.go 386: Populated endpoint ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0", GenerateName:"calico-kube-controllers-788df79c7b-", Namespace:"calico-system", SelfLink:"", UID:"b22d464a-68ec-4133-9687-c90c18294db8", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788df79c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"calico-kube-controllers-788df79c7b-m4kpj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b94de3de8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.649 [INFO][4664] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.4/32] ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.653 [INFO][4664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b94de3de8b ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.662 [INFO][4664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 
15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.666 [INFO][4664] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0", GenerateName:"calico-kube-controllers-788df79c7b-", Namespace:"calico-system", SelfLink:"", UID:"b22d464a-68ec-4133-9687-c90c18294db8", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"788df79c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca", Pod:"calico-kube-controllers-788df79c7b-m4kpj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b94de3de8b", MAC:"ae:c1:d7:66:f1:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.691722 containerd[1510]: 2025-02-13 15:27:09.686 [INFO][4664] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca" Namespace="calico-system" Pod="calico-kube-controllers-788df79c7b-m4kpj" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-calico--kube--controllers--788df79c7b--m4kpj-eth0" Feb 13 15:27:09.720066 containerd[1510]: time="2025-02-13T15:27:09.719795810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.720066 containerd[1510]: time="2025-02-13T15:27:09.719942252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.720066 containerd[1510]: time="2025-02-13T15:27:09.719960612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.721091 containerd[1510]: time="2025-02-13T15:27:09.720817862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.755668 containerd[1510]: time="2025-02-13T15:27:09.753456014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.755668 containerd[1510]: time="2025-02-13T15:27:09.754263343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.755668 containerd[1510]: time="2025-02-13T15:27:09.754281824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.755668 containerd[1510]: time="2025-02-13T15:27:09.754416705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.754692 systemd[1]: Started cri-containerd-01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b.scope - libcontainer container 01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b. Feb 13 15:27:09.774297 systemd-networkd[1409]: calied72c58016b: Link UP Feb 13 15:27:09.776397 systemd-networkd[1409]: calied72c58016b: Gained carrier Feb 13 15:27:09.791336 containerd[1510]: time="2025-02-13T15:27:09.790782101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7688c5dd9f-wlqt5,Uid:ea674312-9393-4b86-8bbb-6e3a7a4e2c23,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411\"" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.234 [INFO][4696] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.280 [INFO][4696] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0 coredns-7db6d8ff4d- kube-system 400c597f-5d24-4504-a9fc-e7fa6fcc44df 717 0 2025-02-13 15:26:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 coredns-7db6d8ff4d-8cbdv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied72c58016b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.282 [INFO][4696] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.404 [INFO][4740] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" HandleID="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.444 [INFO][4740] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" HandleID="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" 
Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318ab0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-0-1-9-12db063e25", "pod":"coredns-7db6d8ff4d-8cbdv", "timestamp":"2025-02-13 15:27:09.404675592 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.444 [INFO][4740] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.637 [INFO][4740] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.637 [INFO][4740] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.644 [INFO][4740] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.676 [INFO][4740] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.693 [INFO][4740] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.698 [INFO][4740] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.704 [INFO][4740] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.705 [INFO][4740] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.710 [INFO][4740] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9 Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.726 [INFO][4740] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.747 [INFO][4740] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.5/26] block=192.168.35.0/26 handle="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.747 [INFO][4740] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.5/26] handle="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.750 [INFO][4740] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:27:09.815671 containerd[1510]: 2025-02-13 15:27:09.752 [INFO][4740] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.5/26] IPv6=[] ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" HandleID="k8s-pod-network.fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.757 [INFO][4696] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"400c597f-5d24-4504-a9fc-e7fa6fcc44df", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"coredns-7db6d8ff4d-8cbdv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied72c58016b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.761 [INFO][4696] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.5/32] ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.761 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied72c58016b ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.776 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" 
WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.777 [INFO][4696] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"400c597f-5d24-4504-a9fc-e7fa6fcc44df", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9", Pod:"coredns-7db6d8ff4d-8cbdv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied72c58016b", MAC:"5a:fc:4a:13:c5:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.816637 containerd[1510]: 2025-02-13 15:27:09.798 [INFO][4696] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbdv" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--8cbdv-eth0" Feb 13 15:27:09.825284 systemd[1]: Started cri-containerd-930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca.scope - libcontainer container 930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca. Feb 13 15:27:09.848920 systemd-networkd[1409]: cali7939d2f0842: Link UP Feb 13 15:27:09.850100 systemd-networkd[1409]: cali7939d2f0842: Gained carrier Feb 13 15:27:09.882573 containerd[1510]: time="2025-02-13T15:27:09.882511001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9kzwx,Uid:0b9e30d9-f00b-49ef-83d6-ebd7f08b8f7b,Namespace:calico-system,Attempt:7,} returns sandbox id \"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b\"" Feb 13 15:27:09.893138 containerd[1510]: time="2025-02-13T15:27:09.892242757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.893138 containerd[1510]: time="2025-02-13T15:27:09.892315998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.893138 containerd[1510]: time="2025-02-13T15:27:09.892327558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.893138 containerd[1510]: time="2025-02-13T15:27:09.892409359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.223 [INFO][4674] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.285 [INFO][4674] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0 coredns-7db6d8ff4d- kube-system 6bd1757e-903b-41e7-a1bd-8c95ee35dbf5 716 0 2025-02-13 15:26:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-0-1-9-12db063e25 coredns-7db6d8ff4d-77zlz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7939d2f0842 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.285 [INFO][4674] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.417 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" HandleID="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.455 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" HandleID="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011a9d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-0-1-9-12db063e25", "pod":"coredns-7db6d8ff4d-77zlz", "timestamp":"2025-02-13 15:27:09.417552427 +0000 UTC"}, Hostname:"ci-4230-0-1-9-12db063e25", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.455 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.749 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.749 [INFO][4739] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-1-9-12db063e25' Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.753 [INFO][4739] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.771 [INFO][4739] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.789 [INFO][4739] ipam/ipam.go 489: Trying affinity for 192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.793 [INFO][4739] ipam/ipam.go 155: Attempting to load block cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.802 [INFO][4739] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.806 [INFO][4739] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.811 [INFO][4739] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.819 [INFO][4739] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.832 [INFO][4739] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.35.6/26] block=192.168.35.0/26 handle="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.832 [INFO][4739] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.35.6/26] handle="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" host="ci-4230-0-1-9-12db063e25" Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.832 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
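The lock/assign/release sequence above is Calico's block-based IPAM: the host holds an affinity for the block 192.168.35.0/26, the block document is loaded, the first free ordinal is claimed for the handle, and the block is written back before the lock is released. The snippet below is a self-contained illustration of that ordinal walk (not Calico's implementation); with ordinals 0-5 already taken it lands on 192.168.35.6, matching the address claimed in the log.

    // block_assign_sketch.go — illustrative only: pick the first free ordinal
    // in a /26 block and record the IPAM handle against it.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    type block struct {
    	cidr        netip.Prefix   // e.g. 192.168.35.0/26 (64 addresses)
    	allocations map[int]string // ordinal -> IPAM handle
    }

    func (b *block) assign(handle string) (netip.Addr, bool) {
    	size := 1 << (32 - b.cidr.Bits()) // 64 ordinals for a /26
    	addr := b.cidr.Addr()
    	for ord := 0; ord < size; ord++ {
    		if _, taken := b.allocations[ord]; !taken {
    			b.allocations[ord] = handle // "Writing block in order to claim IPs"
    			return addr, true
    		}
    		addr = addr.Next()
    	}
    	return netip.Addr{}, false // block full: IPAM would move to another affine block
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.35.0/26"), allocations: map[int]string{}}
    	for ord := 0; ord < 6; ord++ { // ordinals 0-5 were claimed by earlier endpoints
    		b.allocations[ord] = fmt.Sprintf("earlier-handle-%d", ord)
    	}
    	ip, _ := b.assign("k8s-pod-network.<container-id>") // placeholder handle
    	fmt.Println(ip)                                     // 192.168.35.6
    }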
Feb 13 15:27:09.897642 containerd[1510]: 2025-02-13 15:27:09.832 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.6/26] IPv6=[] ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" HandleID="k8s-pod-network.32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Workload="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.839 [INFO][4674] cni-plugin/k8s.go 386: Populated endpoint ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6bd1757e-903b-41e7-a1bd-8c95ee35dbf5", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"", Pod:"coredns-7db6d8ff4d-77zlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7939d2f0842", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.839 [INFO][4674] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.35.6/32] ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.839 [INFO][4674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7939d2f0842 ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.854 [INFO][4674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" 
WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.867 [INFO][4674] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6bd1757e-903b-41e7-a1bd-8c95ee35dbf5", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 26, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-1-9-12db063e25", ContainerID:"32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c", Pod:"coredns-7db6d8ff4d-77zlz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7939d2f0842", MAC:"a2:5a:ff:d9:4b:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:27:09.899978 containerd[1510]: 2025-02-13 15:27:09.889 [INFO][4674] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-77zlz" WorkloadEndpoint="ci--4230--0--1--9--12db063e25-k8s-coredns--7db6d8ff4d--77zlz-eth0" Feb 13 15:27:09.944894 containerd[1510]: time="2025-02-13T15:27:09.943976698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:09.944894 containerd[1510]: time="2025-02-13T15:27:09.944051899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:09.944894 containerd[1510]: time="2025-02-13T15:27:09.944068459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.944894 containerd[1510]: time="2025-02-13T15:27:09.944151140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:09.950890 systemd[1]: Started cri-containerd-fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9.scope - libcontainer container fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9. Feb 13 15:27:09.976064 systemd[1]: Started cri-containerd-32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c.scope - libcontainer container 32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c. Feb 13 15:27:10.001623 containerd[1510]: time="2025-02-13T15:27:10.001574388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-788df79c7b-m4kpj,Uid:b22d464a-68ec-4133-9687-c90c18294db8,Namespace:calico-system,Attempt:6,} returns sandbox id \"930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca\"" Feb 13 15:27:10.069460 containerd[1510]: time="2025-02-13T15:27:10.069408303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbdv,Uid:400c597f-5d24-4504-a9fc-e7fa6fcc44df,Namespace:kube-system,Attempt:6,} returns sandbox id \"fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9\"" Feb 13 15:27:10.094859 kubelet[2787]: I0213 15:27:10.094799 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:10.100918 containerd[1510]: time="2025-02-13T15:27:10.100278225Z" level=info msg="CreateContainer within sandbox \"fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:27:10.106138 containerd[1510]: time="2025-02-13T15:27:10.106014812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-77zlz,Uid:6bd1757e-903b-41e7-a1bd-8c95ee35dbf5,Namespace:kube-system,Attempt:6,} returns sandbox id \"32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c\"" Feb 13 15:27:10.113724 containerd[1510]: time="2025-02-13T15:27:10.113666301Z" level=info msg="CreateContainer within sandbox \"32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:27:10.205802 containerd[1510]: time="2025-02-13T15:27:10.205597659Z" level=info msg="CreateContainer within sandbox \"fc042da6841073378b42019f7222425f824480fb667343b40879febb68eb99d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"40803d3ba8523f349048e1a2abb2de65701f30116979272b91a7652b71249d5b\"" Feb 13 15:27:10.218108 containerd[1510]: time="2025-02-13T15:27:10.218059325Z" level=info msg="StartContainer for \"40803d3ba8523f349048e1a2abb2de65701f30116979272b91a7652b71249d5b\"" Feb 13 15:27:10.276731 containerd[1510]: time="2025-02-13T15:27:10.276683411Z" level=info msg="CreateContainer within sandbox \"32725aa0e7ca17f6a92e461452d347b79c90c5a9975c1fda32a8b4fedd91153c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70aad1354568ca2e4a7557f2677664a6b227dae33752ff4b377163af45118226\"" Feb 13 15:27:10.282479 containerd[1510]: time="2025-02-13T15:27:10.282430879Z" level=info msg="StartContainer for \"70aad1354568ca2e4a7557f2677664a6b227dae33752ff4b377163af45118226\"" Feb 13 15:27:10.323752 systemd[1]: Started cri-containerd-40803d3ba8523f349048e1a2abb2de65701f30116979272b91a7652b71249d5b.scope - libcontainer container 40803d3ba8523f349048e1a2abb2de65701f30116979272b91a7652b71249d5b. 
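The surrounding entries trace the CRI call sequence for the coredns pods: RunPodSandbox returns a sandbox ID, containerd wraps it in a cri-containerd-<id>.scope systemd unit, then CreateContainer and StartContainer run inside that sandbox. A rough sketch of the same sequence against containerd's CRI socket is below; the socket path, the coredns image tag and the k8s.io/cri-api field names are assumptions, while the pod and container metadata mirror the values printed above.

    // cri_flow_sketch.go — RunPodSandbox -> CreateContainer -> StartContainer,
    // as a sketch against the containerd CRI socket (paths/fields assumed).
    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()

    	sandboxCfg := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "coredns-7db6d8ff4d-8cbdv",
    			Uid:       "400c597f-5d24-4504-a9fc-e7fa6fcc44df",
    			Namespace: "kube-system",
    			Attempt:   6, // the log shows Attempt:6 after earlier CNI failures
    		},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		log.Fatal(err)
    	}

    	// containerd runs the result as a systemd scope named
    	// cri-containerd-<sandbox id>.scope, which is what systemd logs above.
    	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		SandboxConfig: sandboxCfg,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
    			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"}, // assumed tag
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
    		log.Fatal(err)
    	}
    }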
Feb 13 15:27:10.349135 systemd[1]: Started cri-containerd-70aad1354568ca2e4a7557f2677664a6b227dae33752ff4b377163af45118226.scope - libcontainer container 70aad1354568ca2e4a7557f2677664a6b227dae33752ff4b377163af45118226. Feb 13 15:27:10.400497 containerd[1510]: time="2025-02-13T15:27:10.400297060Z" level=info msg="StartContainer for \"40803d3ba8523f349048e1a2abb2de65701f30116979272b91a7652b71249d5b\" returns successfully" Feb 13 15:27:10.410967 containerd[1510]: time="2025-02-13T15:27:10.410334857Z" level=info msg="StartContainer for \"70aad1354568ca2e4a7557f2677664a6b227dae33752ff4b377163af45118226\" returns successfully" Feb 13 15:27:10.562175 systemd-networkd[1409]: calid76ff6c4707: Gained IPv6LL Feb 13 15:27:10.882522 systemd-networkd[1409]: cali6b94de3de8b: Gained IPv6LL Feb 13 15:27:10.946458 systemd-networkd[1409]: cali495be1a78f5: Gained IPv6LL Feb 13 15:27:11.143503 kubelet[2787]: I0213 15:27:11.143332 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8cbdv" podStartSLOduration=26.143313208 podStartE2EDuration="26.143313208s" podCreationTimestamp="2025-02-13 15:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:11.125261722 +0000 UTC m=+40.795393203" watchObservedRunningTime="2025-02-13 15:27:11.143313208 +0000 UTC m=+40.813444649" Feb 13 15:27:11.175422 kubelet[2787]: I0213 15:27:11.175345 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-77zlz" podStartSLOduration=26.175323215 podStartE2EDuration="26.175323215s" podCreationTimestamp="2025-02-13 15:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:11.173867478 +0000 UTC m=+40.843998919" watchObservedRunningTime="2025-02-13 15:27:11.175323215 +0000 UTC m=+40.845454656" Feb 13 15:27:11.394091 systemd-networkd[1409]: calied72c58016b: Gained IPv6LL Feb 13 15:27:11.396273 systemd-networkd[1409]: calie3dfb93f797: Gained IPv6LL Feb 13 15:27:11.906169 systemd-networkd[1409]: cali7939d2f0842: Gained IPv6LL Feb 13 15:27:12.836894 containerd[1510]: time="2025-02-13T15:27:12.835021693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:12.836894 containerd[1510]: time="2025-02-13T15:27:12.836684511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:27:12.836894 containerd[1510]: time="2025-02-13T15:27:12.836701952Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:12.840266 containerd[1510]: time="2025-02-13T15:27:12.840215271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:12.841279 containerd[1510]: time="2025-02-13T15:27:12.841233802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.167223981s" Feb 13 15:27:12.841279 containerd[1510]: time="2025-02-13T15:27:12.841278963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:27:12.846277 containerd[1510]: time="2025-02-13T15:27:12.846180378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:27:12.848501 containerd[1510]: time="2025-02-13T15:27:12.848471163Z" level=info msg="CreateContainer within sandbox \"4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:27:12.870307 containerd[1510]: time="2025-02-13T15:27:12.870264808Z" level=info msg="CreateContainer within sandbox \"4b6f05741e6bca980f8d9f436daa4119a3f8b8de9659f9f14dd9316422ca1eaf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5cdb758ce6185f27f1bd6373c0483efa2ec8e0be87decc7b063e3ffcb71f973c\"" Feb 13 15:27:12.871671 containerd[1510]: time="2025-02-13T15:27:12.871644903Z" level=info msg="StartContainer for \"5cdb758ce6185f27f1bd6373c0483efa2ec8e0be87decc7b063e3ffcb71f973c\"" Feb 13 15:27:12.908262 systemd[1]: Started cri-containerd-5cdb758ce6185f27f1bd6373c0483efa2ec8e0be87decc7b063e3ffcb71f973c.scope - libcontainer container 5cdb758ce6185f27f1bd6373c0483efa2ec8e0be87decc7b063e3ffcb71f973c. Feb 13 15:27:12.943242 containerd[1510]: time="2025-02-13T15:27:12.943173504Z" level=info msg="StartContainer for \"5cdb758ce6185f27f1bd6373c0483efa2ec8e0be87decc7b063e3ffcb71f973c\" returns successfully" Feb 13 15:27:13.139220 kubelet[2787]: I0213 15:27:13.138649 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7688c5dd9f-n9z22" podStartSLOduration=16.958785529 podStartE2EDuration="20.138631179s" podCreationTimestamp="2025-02-13 15:26:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:09.665325037 +0000 UTC m=+39.335456478" lastFinishedPulling="2025-02-13 15:27:12.845170727 +0000 UTC m=+42.515302128" observedRunningTime="2025-02-13 15:27:13.13780481 +0000 UTC m=+42.807936251" watchObservedRunningTime="2025-02-13 15:27:13.138631179 +0000 UTC m=+42.808762620" Feb 13 15:27:13.344643 containerd[1510]: time="2025-02-13T15:27:13.344589515Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:13.346445 containerd[1510]: time="2025-02-13T15:27:13.346398095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:27:13.348151 containerd[1510]: time="2025-02-13T15:27:13.348116673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 501.682132ms" Feb 13 15:27:13.348234 containerd[1510]: time="2025-02-13T15:27:13.348215114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:27:13.349469 containerd[1510]: 
time="2025-02-13T15:27:13.349392527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:27:13.350745 containerd[1510]: time="2025-02-13T15:27:13.350326738Z" level=info msg="CreateContainer within sandbox \"1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:27:13.367953 containerd[1510]: time="2025-02-13T15:27:13.367865250Z" level=info msg="CreateContainer within sandbox \"1f916784f74d4bcd3a89aabc5fa7d2e00e44fc4f64a200b223cd64783ea44411\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e8ab6abf947266a4b1a4af82b28b0564cc9fa10cac47ba2381f03da30b35892e\"" Feb 13 15:27:13.370949 containerd[1510]: time="2025-02-13T15:27:13.368900781Z" level=info msg="StartContainer for \"e8ab6abf947266a4b1a4af82b28b0564cc9fa10cac47ba2381f03da30b35892e\"" Feb 13 15:27:13.407401 systemd[1]: Started cri-containerd-e8ab6abf947266a4b1a4af82b28b0564cc9fa10cac47ba2381f03da30b35892e.scope - libcontainer container e8ab6abf947266a4b1a4af82b28b0564cc9fa10cac47ba2381f03da30b35892e. Feb 13 15:27:13.474976 containerd[1510]: time="2025-02-13T15:27:13.474878022Z" level=info msg="StartContainer for \"e8ab6abf947266a4b1a4af82b28b0564cc9fa10cac47ba2381f03da30b35892e\" returns successfully" Feb 13 15:27:14.134328 kubelet[2787]: I0213 15:27:14.134250 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:14.850302 containerd[1510]: time="2025-02-13T15:27:14.849613035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:14.854494 containerd[1510]: time="2025-02-13T15:27:14.852945951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:27:14.864950 containerd[1510]: time="2025-02-13T15:27:14.861513683Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:14.867930 containerd[1510]: time="2025-02-13T15:27:14.867513587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:14.869930 containerd[1510]: time="2025-02-13T15:27:14.868539518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.51910811s" Feb 13 15:27:14.870106 containerd[1510]: time="2025-02-13T15:27:14.870089255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:27:14.873322 containerd[1510]: time="2025-02-13T15:27:14.873289089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:27:14.877458 containerd[1510]: time="2025-02-13T15:27:14.877405373Z" level=info msg="CreateContainer within sandbox \"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:27:14.909933 containerd[1510]: 
time="2025-02-13T15:27:14.909871921Z" level=info msg="CreateContainer within sandbox \"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"66b0fcac48cb73deb6ddab4816535bbf8a37023adb912c70cca5af287d018bab\"" Feb 13 15:27:14.912415 containerd[1510]: time="2025-02-13T15:27:14.911844862Z" level=info msg="StartContainer for \"66b0fcac48cb73deb6ddab4816535bbf8a37023adb912c70cca5af287d018bab\"" Feb 13 15:27:14.960166 systemd[1]: Started cri-containerd-66b0fcac48cb73deb6ddab4816535bbf8a37023adb912c70cca5af287d018bab.scope - libcontainer container 66b0fcac48cb73deb6ddab4816535bbf8a37023adb912c70cca5af287d018bab. Feb 13 15:27:15.004392 containerd[1510]: time="2025-02-13T15:27:15.004347772Z" level=info msg="StartContainer for \"66b0fcac48cb73deb6ddab4816535bbf8a37023adb912c70cca5af287d018bab\" returns successfully" Feb 13 15:27:15.142561 kubelet[2787]: I0213 15:27:15.141616 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:15.415455 kubelet[2787]: I0213 15:27:15.415164 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:15.432718 kubelet[2787]: I0213 15:27:15.431950 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7688c5dd9f-wlqt5" podStartSLOduration=18.876684908 podStartE2EDuration="22.431930374s" podCreationTimestamp="2025-02-13 15:26:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:09.793831698 +0000 UTC m=+39.463963139" lastFinishedPulling="2025-02-13 15:27:13.349077164 +0000 UTC m=+43.019208605" observedRunningTime="2025-02-13 15:27:14.151687518 +0000 UTC m=+43.821818999" watchObservedRunningTime="2025-02-13 15:27:15.431930374 +0000 UTC m=+45.102061815" Feb 13 15:27:15.901967 kernel: bpftool[5483]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:27:16.190202 systemd-networkd[1409]: vxlan.calico: Link UP Feb 13 15:27:16.190210 systemd-networkd[1409]: vxlan.calico: Gained carrier Feb 13 15:27:16.494832 kubelet[2787]: I0213 15:27:16.494678 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:17.282782 systemd-networkd[1409]: vxlan.calico: Gained IPv6LL Feb 13 15:27:17.562735 containerd[1510]: time="2025-02-13T15:27:17.562545233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:17.564691 containerd[1510]: time="2025-02-13T15:27:17.564180090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:27:17.565948 containerd[1510]: time="2025-02-13T15:27:17.565789066Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:17.569719 containerd[1510]: time="2025-02-13T15:27:17.569334382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:17.570289 containerd[1510]: time="2025-02-13T15:27:17.570248311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.695280444s" Feb 13 15:27:17.570353 containerd[1510]: time="2025-02-13T15:27:17.570287151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:27:17.572153 containerd[1510]: time="2025-02-13T15:27:17.572106529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:27:17.589361 containerd[1510]: time="2025-02-13T15:27:17.589295222Z" level=info msg="CreateContainer within sandbox \"930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:27:17.619767 containerd[1510]: time="2025-02-13T15:27:17.619549326Z" level=info msg="CreateContainer within sandbox \"930e2bf76f2cbad2e5d3a74c8c16f7fe05ba97e7cab8e7cc4a50c064a33f0dca\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601\"" Feb 13 15:27:17.620343 containerd[1510]: time="2025-02-13T15:27:17.620302213Z" level=info msg="StartContainer for \"7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601\"" Feb 13 15:27:17.655229 systemd[1]: Started cri-containerd-7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601.scope - libcontainer container 7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601. Feb 13 15:27:17.700388 containerd[1510]: time="2025-02-13T15:27:17.700332497Z" level=info msg="StartContainer for \"7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601\" returns successfully" Feb 13 15:27:18.243953 kubelet[2787]: I0213 15:27:18.243261 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-788df79c7b-m4kpj" podStartSLOduration=17.676951324 podStartE2EDuration="25.243229096s" podCreationTimestamp="2025-02-13 15:26:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:10.005640796 +0000 UTC m=+39.675772237" lastFinishedPulling="2025-02-13 15:27:17.571918568 +0000 UTC m=+47.242050009" observedRunningTime="2025-02-13 15:27:18.179861353 +0000 UTC m=+47.849992794" watchObservedRunningTime="2025-02-13 15:27:18.243229096 +0000 UTC m=+47.913361057" Feb 13 15:27:19.225182 containerd[1510]: time="2025-02-13T15:27:19.225110902Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:19.226334 containerd[1510]: time="2025-02-13T15:27:19.226267033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:27:19.227256 containerd[1510]: time="2025-02-13T15:27:19.227186162Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:19.231065 containerd[1510]: time="2025-02-13T15:27:19.230988638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:19.232051 containerd[1510]: 
time="2025-02-13T15:27:19.231860087Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.659710236s" Feb 13 15:27:19.232051 containerd[1510]: time="2025-02-13T15:27:19.231898527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:27:19.235855 containerd[1510]: time="2025-02-13T15:27:19.235797604Z" level=info msg="CreateContainer within sandbox \"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:27:19.255181 containerd[1510]: time="2025-02-13T15:27:19.255104550Z" level=info msg="CreateContainer within sandbox \"01e3f9a01f35f95c5d2474dd62feaed6128f6e735aa729820240fe5e786da14b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2fad6fe6ef03b95094e1e5e5cad604a4aa4f3496aefcbceb12462dbc187721b9\"" Feb 13 15:27:19.255945 containerd[1510]: time="2025-02-13T15:27:19.255795077Z" level=info msg="StartContainer for \"2fad6fe6ef03b95094e1e5e5cad604a4aa4f3496aefcbceb12462dbc187721b9\"" Feb 13 15:27:19.312249 systemd[1]: Started cri-containerd-2fad6fe6ef03b95094e1e5e5cad604a4aa4f3496aefcbceb12462dbc187721b9.scope - libcontainer container 2fad6fe6ef03b95094e1e5e5cad604a4aa4f3496aefcbceb12462dbc187721b9. Feb 13 15:27:19.355290 containerd[1510]: time="2025-02-13T15:27:19.355189754Z" level=info msg="StartContainer for \"2fad6fe6ef03b95094e1e5e5cad604a4aa4f3496aefcbceb12462dbc187721b9\" returns successfully" Feb 13 15:27:19.572211 kubelet[2787]: I0213 15:27:19.571702 2787 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:27:19.575280 kubelet[2787]: I0213 15:27:19.574953 2787 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:27:20.204885 kubelet[2787]: I0213 15:27:20.204785 2787 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9kzwx" podStartSLOduration=17.863325302 podStartE2EDuration="27.20476553s" podCreationTimestamp="2025-02-13 15:26:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:09.891436628 +0000 UTC m=+39.561568029" lastFinishedPulling="2025-02-13 15:27:19.232876816 +0000 UTC m=+48.903008257" observedRunningTime="2025-02-13 15:27:20.204270526 +0000 UTC m=+49.874401967" watchObservedRunningTime="2025-02-13 15:27:20.20476553 +0000 UTC m=+49.874896971" Feb 13 15:27:30.470368 containerd[1510]: time="2025-02-13T15:27:30.470114427Z" level=info msg="StopPodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:30.470368 containerd[1510]: time="2025-02-13T15:27:30.470289388Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:30.470368 containerd[1510]: time="2025-02-13T15:27:30.470304948Z" level=info msg="StopPodSandbox for 
\"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:30.473602 containerd[1510]: time="2025-02-13T15:27:30.471662159Z" level=info msg="RemovePodSandbox for \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:30.473602 containerd[1510]: time="2025-02-13T15:27:30.471703119Z" level=info msg="Forcibly stopping sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\"" Feb 13 15:27:30.473602 containerd[1510]: time="2025-02-13T15:27:30.471809200Z" level=info msg="TearDown network for sandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" successfully" Feb 13 15:27:30.475618 containerd[1510]: time="2025-02-13T15:27:30.475574309Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.475802 containerd[1510]: time="2025-02-13T15:27:30.475784431Z" level=info msg="RemovePodSandbox \"2351d50c54f38d5f327642704501f8aae6f4c086021c7d72adca617a230efc8f\" returns successfully" Feb 13 15:27:30.476850 containerd[1510]: time="2025-02-13T15:27:30.476803599Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:30.477175 containerd[1510]: time="2025-02-13T15:27:30.477156761Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:30.477257 containerd[1510]: time="2025-02-13T15:27:30.477240802Z" level=info msg="StopPodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:30.477679 containerd[1510]: time="2025-02-13T15:27:30.477637805Z" level=info msg="RemovePodSandbox for \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:30.477745 containerd[1510]: time="2025-02-13T15:27:30.477681965Z" level=info msg="Forcibly stopping sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\"" Feb 13 15:27:30.477809 containerd[1510]: time="2025-02-13T15:27:30.477786646Z" level=info msg="TearDown network for sandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" successfully" Feb 13 15:27:30.482299 containerd[1510]: time="2025-02-13T15:27:30.482222161Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.482616 containerd[1510]: time="2025-02-13T15:27:30.482354682Z" level=info msg="RemovePodSandbox \"e501f3c7344383621bd336eda056780987060c04f28a63b15ca639a0a710926a\" returns successfully" Feb 13 15:27:30.482862 containerd[1510]: time="2025-02-13T15:27:30.482823845Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:30.482966 containerd[1510]: time="2025-02-13T15:27:30.482950326Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:30.483212 containerd[1510]: time="2025-02-13T15:27:30.482966006Z" level=info msg="StopPodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:30.483645 containerd[1510]: time="2025-02-13T15:27:30.483601451Z" level=info msg="RemovePodSandbox for \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:30.483645 containerd[1510]: time="2025-02-13T15:27:30.483630972Z" level=info msg="Forcibly stopping sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\"" Feb 13 15:27:30.483832 containerd[1510]: time="2025-02-13T15:27:30.483708932Z" level=info msg="TearDown network for sandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" successfully" Feb 13 15:27:30.487176 containerd[1510]: time="2025-02-13T15:27:30.487138439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.487271 containerd[1510]: time="2025-02-13T15:27:30.487207519Z" level=info msg="RemovePodSandbox \"67f6b7117f83c5685cc10b71036b29f0739c4b93b834fb86b2654d12e1611c51\" returns successfully" Feb 13 15:27:30.488065 containerd[1510]: time="2025-02-13T15:27:30.487711763Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:30.488065 containerd[1510]: time="2025-02-13T15:27:30.487827724Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:30.488065 containerd[1510]: time="2025-02-13T15:27:30.487838324Z" level=info msg="StopPodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:30.488234 containerd[1510]: time="2025-02-13T15:27:30.488171527Z" level=info msg="RemovePodSandbox for \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:30.488234 containerd[1510]: time="2025-02-13T15:27:30.488195687Z" level=info msg="Forcibly stopping sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\"" Feb 13 15:27:30.488279 containerd[1510]: time="2025-02-13T15:27:30.488256088Z" level=info msg="TearDown network for sandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" successfully" Feb 13 15:27:30.492098 containerd[1510]: time="2025-02-13T15:27:30.492053237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.492339 containerd[1510]: time="2025-02-13T15:27:30.492144958Z" level=info msg="RemovePodSandbox \"5eac98324aed248dc63df8d68a351b946ff19012a0ea0c3c32b15b0fcc603c30\" returns successfully" Feb 13 15:27:30.492802 containerd[1510]: time="2025-02-13T15:27:30.492721282Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:30.492862 containerd[1510]: time="2025-02-13T15:27:30.492827203Z" level=info msg="TearDown network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" successfully" Feb 13 15:27:30.492862 containerd[1510]: time="2025-02-13T15:27:30.492838963Z" level=info msg="StopPodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" returns successfully" Feb 13 15:27:30.494805 containerd[1510]: time="2025-02-13T15:27:30.493299687Z" level=info msg="RemovePodSandbox for \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:30.494805 containerd[1510]: time="2025-02-13T15:27:30.493331527Z" level=info msg="Forcibly stopping sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\"" Feb 13 15:27:30.494805 containerd[1510]: time="2025-02-13T15:27:30.493395847Z" level=info msg="TearDown network for sandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" successfully" Feb 13 15:27:30.497485 containerd[1510]: time="2025-02-13T15:27:30.497442639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.497661 containerd[1510]: time="2025-02-13T15:27:30.497643840Z" level=info msg="RemovePodSandbox \"5fa556c797b52c01fb7d553aabc5eaefd764fe18090c27e086939ce9af00f542\" returns successfully" Feb 13 15:27:30.498344 containerd[1510]: time="2025-02-13T15:27:30.498300445Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" Feb 13 15:27:30.498521 containerd[1510]: time="2025-02-13T15:27:30.498495567Z" level=info msg="TearDown network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" successfully" Feb 13 15:27:30.498559 containerd[1510]: time="2025-02-13T15:27:30.498527287Z" level=info msg="StopPodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" returns successfully" Feb 13 15:27:30.499296 containerd[1510]: time="2025-02-13T15:27:30.499217733Z" level=info msg="RemovePodSandbox for \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" Feb 13 15:27:30.499469 containerd[1510]: time="2025-02-13T15:27:30.499398254Z" level=info msg="Forcibly stopping sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\"" Feb 13 15:27:30.499547 containerd[1510]: time="2025-02-13T15:27:30.499533215Z" level=info msg="TearDown network for sandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" successfully" Feb 13 15:27:30.502887 containerd[1510]: time="2025-02-13T15:27:30.502846681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.503185 containerd[1510]: time="2025-02-13T15:27:30.503075963Z" level=info msg="RemovePodSandbox \"9e757723d6bb50a58d81983bc95bbe528515b040153c9520958b0bf07f8d7db9\" returns successfully" Feb 13 15:27:30.503622 containerd[1510]: time="2025-02-13T15:27:30.503596407Z" level=info msg="StopPodSandbox for \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\"" Feb 13 15:27:30.503726 containerd[1510]: time="2025-02-13T15:27:30.503709487Z" level=info msg="TearDown network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" successfully" Feb 13 15:27:30.503759 containerd[1510]: time="2025-02-13T15:27:30.503724928Z" level=info msg="StopPodSandbox for \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" returns successfully" Feb 13 15:27:30.504390 containerd[1510]: time="2025-02-13T15:27:30.504293892Z" level=info msg="RemovePodSandbox for \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\"" Feb 13 15:27:30.504566 containerd[1510]: time="2025-02-13T15:27:30.504486453Z" level=info msg="Forcibly stopping sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\"" Feb 13 15:27:30.504718 containerd[1510]: time="2025-02-13T15:27:30.504631175Z" level=info msg="TearDown network for sandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" successfully" Feb 13 15:27:30.508838 containerd[1510]: time="2025-02-13T15:27:30.508476804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.508838 containerd[1510]: time="2025-02-13T15:27:30.508548645Z" level=info msg="RemovePodSandbox \"96ac99bc2ac6d1280645bea8d228b58d1cbf42b5634f1fec545b103b9caa8779\" returns successfully" Feb 13 15:27:30.509069 containerd[1510]: time="2025-02-13T15:27:30.509012329Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:30.509213 containerd[1510]: time="2025-02-13T15:27:30.509153530Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:30.509213 containerd[1510]: time="2025-02-13T15:27:30.509175810Z" level=info msg="StopPodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:30.509928 containerd[1510]: time="2025-02-13T15:27:30.509746734Z" level=info msg="RemovePodSandbox for \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:30.509928 containerd[1510]: time="2025-02-13T15:27:30.509779655Z" level=info msg="Forcibly stopping sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\"" Feb 13 15:27:30.509928 containerd[1510]: time="2025-02-13T15:27:30.509858935Z" level=info msg="TearDown network for sandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" successfully" Feb 13 15:27:30.514100 containerd[1510]: time="2025-02-13T15:27:30.513938247Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.514100 containerd[1510]: time="2025-02-13T15:27:30.514012287Z" level=info msg="RemovePodSandbox \"ff394cdecf5e3863d808ce99f442db7df35065c00fd23341451998f72eb04a62\" returns successfully" Feb 13 15:27:30.514671 containerd[1510]: time="2025-02-13T15:27:30.514639732Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:30.514790 containerd[1510]: time="2025-02-13T15:27:30.514769893Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:30.514833 containerd[1510]: time="2025-02-13T15:27:30.514791853Z" level=info msg="StopPodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:30.515356 containerd[1510]: time="2025-02-13T15:27:30.515325618Z" level=info msg="RemovePodSandbox for \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:30.515440 containerd[1510]: time="2025-02-13T15:27:30.515361338Z" level=info msg="Forcibly stopping sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\"" Feb 13 15:27:30.515552 containerd[1510]: time="2025-02-13T15:27:30.515527179Z" level=info msg="TearDown network for sandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" successfully" Feb 13 15:27:30.520027 containerd[1510]: time="2025-02-13T15:27:30.519947733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.520027 containerd[1510]: time="2025-02-13T15:27:30.520027894Z" level=info msg="RemovePodSandbox \"12fcd1bf1c4af049633fa907e88afc58da507abab886b5495bd3c7d529344ae6\" returns successfully" Feb 13 15:27:30.520867 containerd[1510]: time="2025-02-13T15:27:30.520606579Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:30.520867 containerd[1510]: time="2025-02-13T15:27:30.520708219Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:30.520867 containerd[1510]: time="2025-02-13T15:27:30.520717899Z" level=info msg="StopPodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:30.522922 containerd[1510]: time="2025-02-13T15:27:30.521574306Z" level=info msg="RemovePodSandbox for \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:30.522922 containerd[1510]: time="2025-02-13T15:27:30.522400393Z" level=info msg="Forcibly stopping sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\"" Feb 13 15:27:30.522922 containerd[1510]: time="2025-02-13T15:27:30.522510233Z" level=info msg="TearDown network for sandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" successfully" Feb 13 15:27:30.526802 containerd[1510]: time="2025-02-13T15:27:30.526763866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.526994 containerd[1510]: time="2025-02-13T15:27:30.526976468Z" level=info msg="RemovePodSandbox \"6e58847b3b91fa3fc201fb544ea8b0fe1f70cf6c24a2699a8d07a2bfa9689d2c\" returns successfully" Feb 13 15:27:30.527636 containerd[1510]: time="2025-02-13T15:27:30.527462832Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:30.527636 containerd[1510]: time="2025-02-13T15:27:30.527553472Z" level=info msg="TearDown network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" successfully" Feb 13 15:27:30.527636 containerd[1510]: time="2025-02-13T15:27:30.527563433Z" level=info msg="StopPodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" returns successfully" Feb 13 15:27:30.528689 containerd[1510]: time="2025-02-13T15:27:30.528057476Z" level=info msg="RemovePodSandbox for \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:30.528689 containerd[1510]: time="2025-02-13T15:27:30.528083557Z" level=info msg="Forcibly stopping sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\"" Feb 13 15:27:30.528689 containerd[1510]: time="2025-02-13T15:27:30.528160317Z" level=info msg="TearDown network for sandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" successfully" Feb 13 15:27:30.531725 containerd[1510]: time="2025-02-13T15:27:30.531688545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.531932 containerd[1510]: time="2025-02-13T15:27:30.531901186Z" level=info msg="RemovePodSandbox \"c2f678a5ed3478b78b6fa3e4ac8ffff048f268c283fa6263fb75a66ad297648f\" returns successfully" Feb 13 15:27:30.532425 containerd[1510]: time="2025-02-13T15:27:30.532405270Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" Feb 13 15:27:30.532688 containerd[1510]: time="2025-02-13T15:27:30.532671912Z" level=info msg="TearDown network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" successfully" Feb 13 15:27:30.532837 containerd[1510]: time="2025-02-13T15:27:30.532755633Z" level=info msg="StopPodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" returns successfully" Feb 13 15:27:30.533182 containerd[1510]: time="2025-02-13T15:27:30.533107356Z" level=info msg="RemovePodSandbox for \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" Feb 13 15:27:30.533182 containerd[1510]: time="2025-02-13T15:27:30.533152196Z" level=info msg="Forcibly stopping sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\"" Feb 13 15:27:30.533362 containerd[1510]: time="2025-02-13T15:27:30.533259117Z" level=info msg="TearDown network for sandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" successfully" Feb 13 15:27:30.536992 containerd[1510]: time="2025-02-13T15:27:30.536923985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.537191 containerd[1510]: time="2025-02-13T15:27:30.536997866Z" level=info msg="RemovePodSandbox \"ae95d76b9a106e6718655ea56231a5b18d170a546aaae1c9be2f170d6078e343\" returns successfully" Feb 13 15:27:30.538260 containerd[1510]: time="2025-02-13T15:27:30.537717031Z" level=info msg="StopPodSandbox for \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\"" Feb 13 15:27:30.538260 containerd[1510]: time="2025-02-13T15:27:30.537862152Z" level=info msg="TearDown network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" successfully" Feb 13 15:27:30.538260 containerd[1510]: time="2025-02-13T15:27:30.537879313Z" level=info msg="StopPodSandbox for \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" returns successfully" Feb 13 15:27:30.539236 containerd[1510]: time="2025-02-13T15:27:30.538929921Z" level=info msg="RemovePodSandbox for \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\"" Feb 13 15:27:30.539236 containerd[1510]: time="2025-02-13T15:27:30.538970441Z" level=info msg="Forcibly stopping sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\"" Feb 13 15:27:30.539236 containerd[1510]: time="2025-02-13T15:27:30.539093562Z" level=info msg="TearDown network for sandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" successfully" Feb 13 15:27:30.542951 containerd[1510]: time="2025-02-13T15:27:30.542901792Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.543217 containerd[1510]: time="2025-02-13T15:27:30.543080873Z" level=info msg="RemovePodSandbox \"5678f20109b300aac4106388ac73a979a31b2a1463820a44b5b440e03953471f\" returns successfully" Feb 13 15:27:30.543822 containerd[1510]: time="2025-02-13T15:27:30.543647237Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:30.543822 containerd[1510]: time="2025-02-13T15:27:30.543761118Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:30.543822 containerd[1510]: time="2025-02-13T15:27:30.543771438Z" level=info msg="StopPodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:30.544872 containerd[1510]: time="2025-02-13T15:27:30.544502164Z" level=info msg="RemovePodSandbox for \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:30.544872 containerd[1510]: time="2025-02-13T15:27:30.544531684Z" level=info msg="Forcibly stopping sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\"" Feb 13 15:27:30.545934 containerd[1510]: time="2025-02-13T15:27:30.545056568Z" level=info msg="TearDown network for sandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" successfully" Feb 13 15:27:30.550883 containerd[1510]: time="2025-02-13T15:27:30.550759933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.550883 containerd[1510]: time="2025-02-13T15:27:30.550826373Z" level=info msg="RemovePodSandbox \"12689e4d2e8913a9bbc1825631bc773cd1f9533aeb6645ae413b32f9e6b4af35\" returns successfully" Feb 13 15:27:30.551797 containerd[1510]: time="2025-02-13T15:27:30.551589339Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:30.551797 containerd[1510]: time="2025-02-13T15:27:30.551676060Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:30.551797 containerd[1510]: time="2025-02-13T15:27:30.551685660Z" level=info msg="StopPodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:30.552619 containerd[1510]: time="2025-02-13T15:27:30.552421025Z" level=info msg="RemovePodSandbox for \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:30.552619 containerd[1510]: time="2025-02-13T15:27:30.552444306Z" level=info msg="Forcibly stopping sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\"" Feb 13 15:27:30.552840 containerd[1510]: time="2025-02-13T15:27:30.552784388Z" level=info msg="TearDown network for sandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" successfully" Feb 13 15:27:30.557640 containerd[1510]: time="2025-02-13T15:27:30.557518225Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.557895 containerd[1510]: time="2025-02-13T15:27:30.557808027Z" level=info msg="RemovePodSandbox \"7ee5ed4240cdb877878cac1042cb11051c966b1c82f0cb794e2706fec13aee0c\" returns successfully" Feb 13 15:27:30.558380 containerd[1510]: time="2025-02-13T15:27:30.558226791Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:30.558380 containerd[1510]: time="2025-02-13T15:27:30.558320591Z" level=info msg="TearDown network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" successfully" Feb 13 15:27:30.558380 containerd[1510]: time="2025-02-13T15:27:30.558329831Z" level=info msg="StopPodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" returns successfully" Feb 13 15:27:30.558664 containerd[1510]: time="2025-02-13T15:27:30.558631434Z" level=info msg="RemovePodSandbox for \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:30.558818 containerd[1510]: time="2025-02-13T15:27:30.558712354Z" level=info msg="Forcibly stopping sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\"" Feb 13 15:27:30.558872 containerd[1510]: time="2025-02-13T15:27:30.558858155Z" level=info msg="TearDown network for sandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" successfully" Feb 13 15:27:30.562496 containerd[1510]: time="2025-02-13T15:27:30.562336222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.562496 containerd[1510]: time="2025-02-13T15:27:30.562400503Z" level=info msg="RemovePodSandbox \"a9503401bc2382323082260b1f0d33ee9422786d5c09c80975aed28a8e72668e\" returns successfully" Feb 13 15:27:30.562899 containerd[1510]: time="2025-02-13T15:27:30.562759546Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" Feb 13 15:27:30.562899 containerd[1510]: time="2025-02-13T15:27:30.562837186Z" level=info msg="TearDown network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" successfully" Feb 13 15:27:30.562899 containerd[1510]: time="2025-02-13T15:27:30.562845866Z" level=info msg="StopPodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" returns successfully" Feb 13 15:27:30.563257 containerd[1510]: time="2025-02-13T15:27:30.563232069Z" level=info msg="RemovePodSandbox for \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" Feb 13 15:27:30.563312 containerd[1510]: time="2025-02-13T15:27:30.563272630Z" level=info msg="Forcibly stopping sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\"" Feb 13 15:27:30.563379 containerd[1510]: time="2025-02-13T15:27:30.563360310Z" level=info msg="TearDown network for sandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" successfully" Feb 13 15:27:30.570482 containerd[1510]: time="2025-02-13T15:27:30.570430645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.570710 containerd[1510]: time="2025-02-13T15:27:30.570586406Z" level=info msg="RemovePodSandbox \"adcb0a0f55441661c9a5f50c42750a69e7bfaea13f19704c3e64946f1982ca38\" returns successfully" Feb 13 15:27:30.571660 containerd[1510]: time="2025-02-13T15:27:30.571345772Z" level=info msg="StopPodSandbox for \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\"" Feb 13 15:27:30.571660 containerd[1510]: time="2025-02-13T15:27:30.571443413Z" level=info msg="TearDown network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" successfully" Feb 13 15:27:30.571660 containerd[1510]: time="2025-02-13T15:27:30.571452173Z" level=info msg="StopPodSandbox for \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" returns successfully" Feb 13 15:27:30.572078 containerd[1510]: time="2025-02-13T15:27:30.572031538Z" level=info msg="RemovePodSandbox for \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\"" Feb 13 15:27:30.572078 containerd[1510]: time="2025-02-13T15:27:30.572055898Z" level=info msg="Forcibly stopping sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\"" Feb 13 15:27:30.572338 containerd[1510]: time="2025-02-13T15:27:30.572242099Z" level=info msg="TearDown network for sandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" successfully" Feb 13 15:27:30.589210 containerd[1510]: time="2025-02-13T15:27:30.588623626Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.589210 containerd[1510]: time="2025-02-13T15:27:30.588763028Z" level=info msg="RemovePodSandbox \"dd0d9c674e95795cc9b8542aa7b17023b1d23949eaadf096104170b2648feaf4\" returns successfully" Feb 13 15:27:30.590232 containerd[1510]: time="2025-02-13T15:27:30.589776075Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:30.590232 containerd[1510]: time="2025-02-13T15:27:30.590029357Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:30.590232 containerd[1510]: time="2025-02-13T15:27:30.590053438Z" level=info msg="StopPodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:30.591392 containerd[1510]: time="2025-02-13T15:27:30.591080405Z" level=info msg="RemovePodSandbox for \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:30.591392 containerd[1510]: time="2025-02-13T15:27:30.591160366Z" level=info msg="Forcibly stopping sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\"" Feb 13 15:27:30.591392 containerd[1510]: time="2025-02-13T15:27:30.591305567Z" level=info msg="TearDown network for sandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" successfully" Feb 13 15:27:30.597104 containerd[1510]: time="2025-02-13T15:27:30.597001171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.597263 containerd[1510]: time="2025-02-13T15:27:30.597154813Z" level=info msg="RemovePodSandbox \"dcb44adee745a28b338180b30146109be101c35ca2b6728cce53b656e6cdf3c0\" returns successfully" Feb 13 15:27:30.597863 containerd[1510]: time="2025-02-13T15:27:30.597825018Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:30.598026 containerd[1510]: time="2025-02-13T15:27:30.598003459Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:30.598080 containerd[1510]: time="2025-02-13T15:27:30.598028219Z" level=info msg="StopPodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:30.598580 containerd[1510]: time="2025-02-13T15:27:30.598525943Z" level=info msg="RemovePodSandbox for \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:30.598580 containerd[1510]: time="2025-02-13T15:27:30.598576344Z" level=info msg="Forcibly stopping sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\"" Feb 13 15:27:30.598693 containerd[1510]: time="2025-02-13T15:27:30.598671064Z" level=info msg="TearDown network for sandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" successfully" Feb 13 15:27:30.603835 containerd[1510]: time="2025-02-13T15:27:30.603766424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.604167 containerd[1510]: time="2025-02-13T15:27:30.603864345Z" level=info msg="RemovePodSandbox \"4f6547910fe17e0df357ced865ce22a7ebf60bfa334fd79b52ac5009b48c1d54\" returns successfully" Feb 13 15:27:30.604633 containerd[1510]: time="2025-02-13T15:27:30.604556270Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:30.605086 containerd[1510]: time="2025-02-13T15:27:30.604657111Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:30.605086 containerd[1510]: time="2025-02-13T15:27:30.604668671Z" level=info msg="StopPodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:30.606030 containerd[1510]: time="2025-02-13T15:27:30.605516558Z" level=info msg="RemovePodSandbox for \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:30.606030 containerd[1510]: time="2025-02-13T15:27:30.605572838Z" level=info msg="Forcibly stopping sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\"" Feb 13 15:27:30.606030 containerd[1510]: time="2025-02-13T15:27:30.605712199Z" level=info msg="TearDown network for sandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" successfully" Feb 13 15:27:30.610883 containerd[1510]: time="2025-02-13T15:27:30.610837759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.611113 containerd[1510]: time="2025-02-13T15:27:30.611094001Z" level=info msg="RemovePodSandbox \"e23a9cebd106c9641877fce751c818d66031a67540674ed1b1f12c6a17bc65bb\" returns successfully" Feb 13 15:27:30.611849 containerd[1510]: time="2025-02-13T15:27:30.611809686Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:30.612282 containerd[1510]: time="2025-02-13T15:27:30.612249170Z" level=info msg="TearDown network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" successfully" Feb 13 15:27:30.612332 containerd[1510]: time="2025-02-13T15:27:30.612285370Z" level=info msg="StopPodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" returns successfully" Feb 13 15:27:30.612788 containerd[1510]: time="2025-02-13T15:27:30.612755254Z" level=info msg="RemovePodSandbox for \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:30.612833 containerd[1510]: time="2025-02-13T15:27:30.612808334Z" level=info msg="Forcibly stopping sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\"" Feb 13 15:27:30.612986 containerd[1510]: time="2025-02-13T15:27:30.612958695Z" level=info msg="TearDown network for sandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" successfully" Feb 13 15:27:30.617922 containerd[1510]: time="2025-02-13T15:27:30.617834173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.617922 containerd[1510]: time="2025-02-13T15:27:30.617925534Z" level=info msg="RemovePodSandbox \"6929926535d6325dec17f577078c7ae03381b089ac45f9abde287d7d9bf25a3d\" returns successfully" Feb 13 15:27:30.618765 containerd[1510]: time="2025-02-13T15:27:30.618627979Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" Feb 13 15:27:30.619528 containerd[1510]: time="2025-02-13T15:27:30.619228464Z" level=info msg="TearDown network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" successfully" Feb 13 15:27:30.619528 containerd[1510]: time="2025-02-13T15:27:30.619260344Z" level=info msg="StopPodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" returns successfully" Feb 13 15:27:30.620547 containerd[1510]: time="2025-02-13T15:27:30.620505514Z" level=info msg="RemovePodSandbox for \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" Feb 13 15:27:30.621948 containerd[1510]: time="2025-02-13T15:27:30.620678475Z" level=info msg="Forcibly stopping sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\"" Feb 13 15:27:30.621948 containerd[1510]: time="2025-02-13T15:27:30.620827556Z" level=info msg="TearDown network for sandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" successfully" Feb 13 15:27:30.626697 containerd[1510]: time="2025-02-13T15:27:30.626650082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.626871 containerd[1510]: time="2025-02-13T15:27:30.626850923Z" level=info msg="RemovePodSandbox \"d54f8b70f0cceb73c43443ca6074e80e72ecd0001a999f26a0ffb61ea84841ab\" returns successfully" Feb 13 15:27:30.627628 containerd[1510]: time="2025-02-13T15:27:30.627591489Z" level=info msg="StopPodSandbox for \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\"" Feb 13 15:27:30.627833 containerd[1510]: time="2025-02-13T15:27:30.627805730Z" level=info msg="TearDown network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" successfully" Feb 13 15:27:30.627868 containerd[1510]: time="2025-02-13T15:27:30.627836891Z" level=info msg="StopPodSandbox for \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" returns successfully" Feb 13 15:27:30.628403 containerd[1510]: time="2025-02-13T15:27:30.628373015Z" level=info msg="RemovePodSandbox for \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\"" Feb 13 15:27:30.628462 containerd[1510]: time="2025-02-13T15:27:30.628414015Z" level=info msg="Forcibly stopping sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\"" Feb 13 15:27:30.628544 containerd[1510]: time="2025-02-13T15:27:30.628521616Z" level=info msg="TearDown network for sandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" successfully" Feb 13 15:27:30.634181 containerd[1510]: time="2025-02-13T15:27:30.634080539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.634181 containerd[1510]: time="2025-02-13T15:27:30.634174500Z" level=info msg="RemovePodSandbox \"96e9047bc4ff30931c955d663b5e14652710f897ffb8478a1171ed9f21f87ca8\" returns successfully" Feb 13 15:27:30.634738 containerd[1510]: time="2025-02-13T15:27:30.634634543Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:30.634888 containerd[1510]: time="2025-02-13T15:27:30.634744784Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:30.634888 containerd[1510]: time="2025-02-13T15:27:30.634756264Z" level=info msg="StopPodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:30.637071 containerd[1510]: time="2025-02-13T15:27:30.635398269Z" level=info msg="RemovePodSandbox for \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:30.637071 containerd[1510]: time="2025-02-13T15:27:30.635442350Z" level=info msg="Forcibly stopping sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\"" Feb 13 15:27:30.637071 containerd[1510]: time="2025-02-13T15:27:30.635556431Z" level=info msg="TearDown network for sandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" successfully" Feb 13 15:27:30.640540 containerd[1510]: time="2025-02-13T15:27:30.640393108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.641720 containerd[1510]: time="2025-02-13T15:27:30.640588710Z" level=info msg="RemovePodSandbox \"ebd18fc266abf2eaedf6ab602a9680a57cfe9908ac6801f96b6cc3e650fb163e\" returns successfully" Feb 13 15:27:30.641720 containerd[1510]: time="2025-02-13T15:27:30.641384236Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:30.641720 containerd[1510]: time="2025-02-13T15:27:30.641559957Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:30.641720 containerd[1510]: time="2025-02-13T15:27:30.641581997Z" level=info msg="StopPodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:30.642926 containerd[1510]: time="2025-02-13T15:27:30.642863087Z" level=info msg="RemovePodSandbox for \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:30.643044 containerd[1510]: time="2025-02-13T15:27:30.642902688Z" level=info msg="Forcibly stopping sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\"" Feb 13 15:27:30.643044 containerd[1510]: time="2025-02-13T15:27:30.643022809Z" level=info msg="TearDown network for sandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" successfully" Feb 13 15:27:30.646156 containerd[1510]: time="2025-02-13T15:27:30.646088672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.646809 containerd[1510]: time="2025-02-13T15:27:30.646170313Z" level=info msg="RemovePodSandbox \"3a590c97b4d7b1ecbbdc4b14542271f22f5ebef6e377c6a04727f53670c6a82b\" returns successfully" Feb 13 15:27:30.647881 containerd[1510]: time="2025-02-13T15:27:30.647235641Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:30.647881 containerd[1510]: time="2025-02-13T15:27:30.647429483Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:30.647881 containerd[1510]: time="2025-02-13T15:27:30.647449803Z" level=info msg="StopPodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:30.648973 containerd[1510]: time="2025-02-13T15:27:30.648532571Z" level=info msg="RemovePodSandbox for \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:30.648973 containerd[1510]: time="2025-02-13T15:27:30.648576772Z" level=info msg="Forcibly stopping sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\"" Feb 13 15:27:30.648973 containerd[1510]: time="2025-02-13T15:27:30.648666212Z" level=info msg="TearDown network for sandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" successfully" Feb 13 15:27:30.666665 containerd[1510]: time="2025-02-13T15:27:30.666427150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.666665 containerd[1510]: time="2025-02-13T15:27:30.666518431Z" level=info msg="RemovePodSandbox \"58505d974461af5d16f3269c2684d6317ddfde05be59f59c923f31c0b96f44ce\" returns successfully" Feb 13 15:27:30.667260 containerd[1510]: time="2025-02-13T15:27:30.667229596Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:30.667576 containerd[1510]: time="2025-02-13T15:27:30.667439158Z" level=info msg="TearDown network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" successfully" Feb 13 15:27:30.667576 containerd[1510]: time="2025-02-13T15:27:30.667455598Z" level=info msg="StopPodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" returns successfully" Feb 13 15:27:30.667794 containerd[1510]: time="2025-02-13T15:27:30.667769521Z" level=info msg="RemovePodSandbox for \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:30.667837 containerd[1510]: time="2025-02-13T15:27:30.667801921Z" level=info msg="Forcibly stopping sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\"" Feb 13 15:27:30.668007 containerd[1510]: time="2025-02-13T15:27:30.667987282Z" level=info msg="TearDown network for sandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" successfully" Feb 13 15:27:30.671438 containerd[1510]: time="2025-02-13T15:27:30.671395709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.671541 containerd[1510]: time="2025-02-13T15:27:30.671468629Z" level=info msg="RemovePodSandbox \"b5f9a81677d1ac8aaf9963895af3f1bf25290e2581c801937f4958d4d9bb4a5b\" returns successfully" Feb 13 15:27:30.671942 containerd[1510]: time="2025-02-13T15:27:30.671894233Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" Feb 13 15:27:30.672216 containerd[1510]: time="2025-02-13T15:27:30.672176715Z" level=info msg="TearDown network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" successfully" Feb 13 15:27:30.672216 containerd[1510]: time="2025-02-13T15:27:30.672192675Z" level=info msg="StopPodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" returns successfully" Feb 13 15:27:30.672618 containerd[1510]: time="2025-02-13T15:27:30.672595558Z" level=info msg="RemovePodSandbox for \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" Feb 13 15:27:30.672673 containerd[1510]: time="2025-02-13T15:27:30.672624758Z" level=info msg="Forcibly stopping sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\"" Feb 13 15:27:30.672708 containerd[1510]: time="2025-02-13T15:27:30.672694079Z" level=info msg="TearDown network for sandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" successfully" Feb 13 15:27:30.676431 containerd[1510]: time="2025-02-13T15:27:30.676375667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.676589 containerd[1510]: time="2025-02-13T15:27:30.676453148Z" level=info msg="RemovePodSandbox \"797d58e21a059c0320c2c6963b3c2f16963318a8afed53d1c70ea027116ada37\" returns successfully" Feb 13 15:27:30.677238 containerd[1510]: time="2025-02-13T15:27:30.677038113Z" level=info msg="StopPodSandbox for \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\"" Feb 13 15:27:30.677238 containerd[1510]: time="2025-02-13T15:27:30.677155913Z" level=info msg="TearDown network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" successfully" Feb 13 15:27:30.677238 containerd[1510]: time="2025-02-13T15:27:30.677168314Z" level=info msg="StopPodSandbox for \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" returns successfully" Feb 13 15:27:30.678940 containerd[1510]: time="2025-02-13T15:27:30.677528596Z" level=info msg="RemovePodSandbox for \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\"" Feb 13 15:27:30.678940 containerd[1510]: time="2025-02-13T15:27:30.677556037Z" level=info msg="Forcibly stopping sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\"" Feb 13 15:27:30.678940 containerd[1510]: time="2025-02-13T15:27:30.677614637Z" level=info msg="TearDown network for sandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" successfully" Feb 13 15:27:30.680924 containerd[1510]: time="2025-02-13T15:27:30.680863102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.681135 containerd[1510]: time="2025-02-13T15:27:30.681093264Z" level=info msg="RemovePodSandbox \"ad8d2f086fa40a924a63aaf1ca2e4a3ac863d82c03ddd6a43b49929810abdc35\" returns successfully" Feb 13 15:27:30.681793 containerd[1510]: time="2025-02-13T15:27:30.681759989Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:30.681892 containerd[1510]: time="2025-02-13T15:27:30.681880190Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:30.681949 containerd[1510]: time="2025-02-13T15:27:30.681892790Z" level=info msg="StopPodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:30.682956 containerd[1510]: time="2025-02-13T15:27:30.682304793Z" level=info msg="RemovePodSandbox for \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:30.682956 containerd[1510]: time="2025-02-13T15:27:30.682346594Z" level=info msg="Forcibly stopping sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\"" Feb 13 15:27:30.682956 containerd[1510]: time="2025-02-13T15:27:30.682442235Z" level=info msg="TearDown network for sandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" successfully" Feb 13 15:27:30.686900 containerd[1510]: time="2025-02-13T15:27:30.686853469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.687171 containerd[1510]: time="2025-02-13T15:27:30.687144591Z" level=info msg="RemovePodSandbox \"52c06876651faf6c823bfe4ebdd9c59357ca3c348fea16dd30e83e2c98229087\" returns successfully" Feb 13 15:27:30.687718 containerd[1510]: time="2025-02-13T15:27:30.687689595Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:30.687828 containerd[1510]: time="2025-02-13T15:27:30.687809956Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:30.687875 containerd[1510]: time="2025-02-13T15:27:30.687827956Z" level=info msg="StopPodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:30.689946 containerd[1510]: time="2025-02-13T15:27:30.688329600Z" level=info msg="RemovePodSandbox for \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:30.689946 containerd[1510]: time="2025-02-13T15:27:30.688360480Z" level=info msg="Forcibly stopping sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\"" Feb 13 15:27:30.689946 containerd[1510]: time="2025-02-13T15:27:30.688431161Z" level=info msg="TearDown network for sandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" successfully" Feb 13 15:27:30.693079 containerd[1510]: time="2025-02-13T15:27:30.692776995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.693079 containerd[1510]: time="2025-02-13T15:27:30.692896996Z" level=info msg="RemovePodSandbox \"913ac59468e5e7f9022c7b3c8c1fb026c4fdc61c76ef5c47ab0c08e6730af896\" returns successfully" Feb 13 15:27:30.693475 containerd[1510]: time="2025-02-13T15:27:30.693428440Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:30.693559 containerd[1510]: time="2025-02-13T15:27:30.693541921Z" level=info msg="TearDown network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" successfully" Feb 13 15:27:30.693559 containerd[1510]: time="2025-02-13T15:27:30.693553561Z" level=info msg="StopPodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" returns successfully" Feb 13 15:27:30.694521 containerd[1510]: time="2025-02-13T15:27:30.694469448Z" level=info msg="RemovePodSandbox for \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:30.694521 containerd[1510]: time="2025-02-13T15:27:30.694505528Z" level=info msg="Forcibly stopping sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\"" Feb 13 15:27:30.694612 containerd[1510]: time="2025-02-13T15:27:30.694581969Z" level=info msg="TearDown network for sandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" successfully" Feb 13 15:27:30.701398 containerd[1510]: time="2025-02-13T15:27:30.701332021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.701398 containerd[1510]: time="2025-02-13T15:27:30.701412742Z" level=info msg="RemovePodSandbox \"7c65f0ae8174a9a3b727e3edeb3d391e9a2c6ad1551300842cd3774839c56aba\" returns successfully" Feb 13 15:27:30.701995 containerd[1510]: time="2025-02-13T15:27:30.701897545Z" level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" Feb 13 15:27:30.702266 containerd[1510]: time="2025-02-13T15:27:30.702167948Z" level=info msg="TearDown network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" successfully" Feb 13 15:27:30.702266 containerd[1510]: time="2025-02-13T15:27:30.702187748Z" level=info msg="StopPodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" returns successfully" Feb 13 15:27:30.703990 containerd[1510]: time="2025-02-13T15:27:30.702449630Z" level=info msg="RemovePodSandbox for \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" Feb 13 15:27:30.703990 containerd[1510]: time="2025-02-13T15:27:30.702477550Z" level=info msg="Forcibly stopping sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\"" Feb 13 15:27:30.703990 containerd[1510]: time="2025-02-13T15:27:30.702544311Z" level=info msg="TearDown network for sandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" successfully" Feb 13 15:27:30.706694 containerd[1510]: time="2025-02-13T15:27:30.706649822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:27:30.706884 containerd[1510]: time="2025-02-13T15:27:30.706862984Z" level=info msg="RemovePodSandbox \"1630fc018426270afae4722535dee88d7e0bb8c901be690d8ff96f6766e45519\" returns successfully" Feb 13 15:27:30.707750 containerd[1510]: time="2025-02-13T15:27:30.707722471Z" level=info msg="StopPodSandbox for \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\"" Feb 13 15:27:30.708020 containerd[1510]: time="2025-02-13T15:27:30.708002033Z" level=info msg="TearDown network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" successfully" Feb 13 15:27:30.708085 containerd[1510]: time="2025-02-13T15:27:30.708071833Z" level=info msg="StopPodSandbox for \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" returns successfully" Feb 13 15:27:30.708422 containerd[1510]: time="2025-02-13T15:27:30.708400996Z" level=info msg="RemovePodSandbox for \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\"" Feb 13 15:27:30.708542 containerd[1510]: time="2025-02-13T15:27:30.708507037Z" level=info msg="Forcibly stopping sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\"" Feb 13 15:27:30.708684 containerd[1510]: time="2025-02-13T15:27:30.708666678Z" level=info msg="TearDown network for sandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" successfully" Feb 13 15:27:30.713687 containerd[1510]: time="2025-02-13T15:27:30.713624756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:30.714275 containerd[1510]: time="2025-02-13T15:27:30.714244681Z" level=info msg="RemovePodSandbox \"ca009c11f4df0080747f89cf6f9acd6e79dc2196bd2518b87548eb3a4dd571c7\" returns successfully" Feb 13 15:27:32.625114 systemd[1]: run-containerd-runc-k8s.io-7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601-runc.RhJYuy.mount: Deactivated successfully. Feb 13 15:27:46.659238 kubelet[2787]: I0213 15:27:46.658722 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:27:48.385963 kubelet[2787]: I0213 15:27:48.385784 2787 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:16.515173 systemd[1]: run-containerd-runc-k8s.io-b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524-runc.KZ9Rkk.mount: Deactivated successfully. Feb 13 15:29:16.518304 systemd[1]: run-containerd-runc-k8s.io-b3de0c64b65e3f0cbd2809e4a2c98bcca4c81bed6265fa2deafba02c60da6524-runc.94UOLq.mount: Deactivated successfully. Feb 13 15:30:02.628026 systemd[1]: run-containerd-runc-k8s.io-7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601-runc.tHwMoU.mount: Deactivated successfully. Feb 13 15:31:02.623583 systemd[1]: run-containerd-runc-k8s.io-7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601-runc.dWGUxF.mount: Deactivated successfully. Feb 13 15:31:12.524140 systemd[1]: run-containerd-runc-k8s.io-7308aa12c1e9e6e81cbf57f2d9f3fe8dea7e1de82c0c053f4b09eb5d45a11601-runc.vN0Dvt.mount: Deactivated successfully. Feb 13 15:31:23.292328 systemd[1]: Started sshd@7-78.47.85.163:22-139.178.68.195:49480.service - OpenSSH per-connection server daemon (139.178.68.195:49480). 
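Note (editor): the containerd entries above all follow the same CRI cleanup cycle: StopPodSandbox, TearDown network, Forcibly stopping, RemovePodSandbox. The "Failed to get podSandbox status ... not found" warnings mean the sandbox metadata is already gone by the time containerd emits the container event, so the event is sent with a nil podSandboxStatus and the removal still returns successfully; a burst of removals like this is typically the kubelet garbage-collecting exited sandboxes. As a rough, hypothetical illustration (not part of this log), the Python sketch below tallies those two outcomes from a plain-text journal export; the file name and the regexes are assumptions based on the exact message wording shown above.

# Hypothetical helper, not part of this log: count sandbox removals and
# "not found" status warnings in a plain-text journal export such as the
# section above (e.g. `journalctl -o short-precise > node.log`).
# The regexes assume containerd's wording exactly as it appears here,
# including the backslash-escaped quotes inside msg="...".
import re
from collections import Counter

REMOVED = re.compile(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully')
NO_STATUS = re.compile(r'Failed to get podSandbox status for container event for sandboxID \\"([0-9a-f]{64})\\"')

def summarize(path: str = "node.log") -> None:
    removed = Counter()
    no_status = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in REMOVED.finditer(line):
                removed[m.group(1)] += 1
            for m in NO_STATUS.finditer(line):
                no_status[m.group(1)] += 1
    print(f"{len(removed)} sandboxes removed; "
          f"{len(no_status)} of them logged a 'not found' status warning")

if __name__ == "__main__":
    summarize()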
Feb 13 15:31:24.297400 sshd[6286]: Accepted publickey for core from 139.178.68.195 port 49480 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:24.301371 sshd-session[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:24.310448 systemd-logind[1489]: New session 8 of user core. Feb 13 15:31:24.317141 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:31:25.084239 sshd[6288]: Connection closed by 139.178.68.195 port 49480 Feb 13 15:31:25.085266 sshd-session[6286]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:25.090608 systemd[1]: sshd@7-78.47.85.163:22-139.178.68.195:49480.service: Deactivated successfully. Feb 13 15:31:25.094848 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:31:25.095805 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:31:25.096704 systemd-logind[1489]: Removed session 8. Feb 13 15:31:30.261257 systemd[1]: Started sshd@8-78.47.85.163:22-139.178.68.195:37216.service - OpenSSH per-connection server daemon (139.178.68.195:37216). Feb 13 15:31:31.254231 sshd[6304]: Accepted publickey for core from 139.178.68.195 port 37216 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:31.256679 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:31.264787 systemd-logind[1489]: New session 9 of user core. Feb 13 15:31:31.272320 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:31:32.012628 sshd[6308]: Connection closed by 139.178.68.195 port 37216 Feb 13 15:31:32.013670 sshd-session[6304]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:32.018841 systemd[1]: sshd@8-78.47.85.163:22-139.178.68.195:37216.service: Deactivated successfully. Feb 13 15:31:32.021945 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:31:32.025179 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:31:32.028632 systemd-logind[1489]: Removed session 9. Feb 13 15:31:37.189267 systemd[1]: Started sshd@9-78.47.85.163:22-139.178.68.195:54234.service - OpenSSH per-connection server daemon (139.178.68.195:54234). Feb 13 15:31:38.176990 sshd[6339]: Accepted publickey for core from 139.178.68.195 port 54234 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:38.179240 sshd-session[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:38.185090 systemd-logind[1489]: New session 10 of user core. Feb 13 15:31:38.191241 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:31:38.950343 sshd[6341]: Connection closed by 139.178.68.195 port 54234 Feb 13 15:31:38.951743 sshd-session[6339]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:38.958108 systemd[1]: sshd@9-78.47.85.163:22-139.178.68.195:54234.service: Deactivated successfully. Feb 13 15:31:38.962686 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:31:38.964388 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:31:38.966755 systemd-logind[1489]: Removed session 10. Feb 13 15:31:39.132008 systemd[1]: Started sshd@10-78.47.85.163:22-139.178.68.195:54240.service - OpenSSH per-connection server daemon (139.178.68.195:54240). 
Feb 13 15:31:40.123987 sshd[6355]: Accepted publickey for core from 139.178.68.195 port 54240 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:40.125939 sshd-session[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:40.133291 systemd-logind[1489]: New session 11 of user core. Feb 13 15:31:40.138147 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:31:40.930619 sshd[6357]: Connection closed by 139.178.68.195 port 54240 Feb 13 15:31:40.932183 sshd-session[6355]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:40.937001 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:31:40.939332 systemd[1]: sshd@10-78.47.85.163:22-139.178.68.195:54240.service: Deactivated successfully. Feb 13 15:31:40.945865 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:31:40.947771 systemd-logind[1489]: Removed session 11. Feb 13 15:31:41.110331 systemd[1]: Started sshd@11-78.47.85.163:22-139.178.68.195:54246.service - OpenSSH per-connection server daemon (139.178.68.195:54246). Feb 13 15:31:42.102042 sshd[6367]: Accepted publickey for core from 139.178.68.195 port 54246 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:42.103305 sshd-session[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:42.109109 systemd-logind[1489]: New session 12 of user core. Feb 13 15:31:42.118208 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:31:42.861685 sshd[6369]: Connection closed by 139.178.68.195 port 54246 Feb 13 15:31:42.862595 sshd-session[6367]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:42.869644 systemd[1]: sshd@11-78.47.85.163:22-139.178.68.195:54246.service: Deactivated successfully. Feb 13 15:31:42.869758 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:31:42.872826 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:31:42.876375 systemd-logind[1489]: Removed session 12. Feb 13 15:31:48.046431 systemd[1]: Started sshd@12-78.47.85.163:22-139.178.68.195:48490.service - OpenSSH per-connection server daemon (139.178.68.195:48490). Feb 13 15:31:49.047704 sshd[6408]: Accepted publickey for core from 139.178.68.195 port 48490 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:49.050988 sshd-session[6408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:49.059862 systemd-logind[1489]: New session 13 of user core. Feb 13 15:31:49.065881 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:31:49.820180 sshd[6410]: Connection closed by 139.178.68.195 port 48490 Feb 13 15:31:49.821101 sshd-session[6408]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:49.827345 systemd[1]: sshd@12-78.47.85.163:22-139.178.68.195:48490.service: Deactivated successfully. Feb 13 15:31:49.831278 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:31:49.833094 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:31:49.835116 systemd-logind[1489]: Removed session 13. Feb 13 15:31:54.999609 systemd[1]: Started sshd@13-78.47.85.163:22-139.178.68.195:48498.service - OpenSSH per-connection server daemon (139.178.68.195:48498). 
Feb 13 15:31:55.988862 sshd[6433]: Accepted publickey for core from 139.178.68.195 port 48498 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:55.991768 sshd-session[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:55.997976 systemd-logind[1489]: New session 14 of user core. Feb 13 15:31:56.003224 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:31:56.748944 sshd[6435]: Connection closed by 139.178.68.195 port 48498 Feb 13 15:31:56.749570 sshd-session[6433]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:56.755872 systemd[1]: sshd@13-78.47.85.163:22-139.178.68.195:48498.service: Deactivated successfully. Feb 13 15:31:56.762736 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:31:56.764248 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:31:56.767422 systemd-logind[1489]: Removed session 14. Feb 13 15:31:56.930195 systemd[1]: Started sshd@14-78.47.85.163:22-139.178.68.195:32922.service - OpenSSH per-connection server daemon (139.178.68.195:32922). Feb 13 15:31:57.932159 sshd[6446]: Accepted publickey for core from 139.178.68.195 port 32922 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:31:57.934233 sshd-session[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:31:57.941707 systemd-logind[1489]: New session 15 of user core. Feb 13 15:31:57.950571 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:31:58.829596 sshd[6448]: Connection closed by 139.178.68.195 port 32922 Feb 13 15:31:58.831792 sshd-session[6446]: pam_unix(sshd:session): session closed for user core Feb 13 15:31:58.838844 systemd[1]: sshd@14-78.47.85.163:22-139.178.68.195:32922.service: Deactivated successfully. Feb 13 15:31:58.842521 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:31:58.844826 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:31:58.845792 systemd-logind[1489]: Removed session 15. Feb 13 15:31:59.016250 systemd[1]: Started sshd@15-78.47.85.163:22-139.178.68.195:32936.service - OpenSSH per-connection server daemon (139.178.68.195:32936). Feb 13 15:32:00.012288 sshd[6458]: Accepted publickey for core from 139.178.68.195 port 32936 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:32:00.014598 sshd-session[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:00.025399 systemd-logind[1489]: New session 16 of user core. Feb 13 15:32:00.032036 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:32:03.182995 sshd[6460]: Connection closed by 139.178.68.195 port 32936 Feb 13 15:32:03.183833 sshd-session[6458]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:03.189452 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:32:03.189990 systemd[1]: sshd@15-78.47.85.163:22-139.178.68.195:32936.service: Deactivated successfully. Feb 13 15:32:03.195207 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:32:03.195596 systemd[1]: session-16.scope: Consumed 608ms CPU time, 70.4M memory peak. Feb 13 15:32:03.200179 systemd-logind[1489]: Removed session 16. Feb 13 15:32:03.365624 systemd[1]: Started sshd@16-78.47.85.163:22-139.178.68.195:32942.service - OpenSSH per-connection server daemon (139.178.68.195:32942). 
Feb 13 15:32:04.348929 sshd[6503]: Accepted publickey for core from 139.178.68.195 port 32942 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:32:04.351325 sshd-session[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:04.357295 systemd-logind[1489]: New session 17 of user core. Feb 13 15:32:04.364369 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:32:05.239997 sshd[6505]: Connection closed by 139.178.68.195 port 32942 Feb 13 15:32:05.239818 sshd-session[6503]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:05.246154 systemd[1]: sshd@16-78.47.85.163:22-139.178.68.195:32942.service: Deactivated successfully. Feb 13 15:32:05.252206 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:32:05.255071 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:32:05.257413 systemd-logind[1489]: Removed session 17. Feb 13 15:32:05.422637 systemd[1]: Started sshd@17-78.47.85.163:22-139.178.68.195:32952.service - OpenSSH per-connection server daemon (139.178.68.195:32952). Feb 13 15:32:06.422973 sshd[6515]: Accepted publickey for core from 139.178.68.195 port 32952 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:32:06.425032 sshd-session[6515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:06.434474 systemd-logind[1489]: New session 18 of user core. Feb 13 15:32:06.439225 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:32:07.199768 sshd[6517]: Connection closed by 139.178.68.195 port 32952 Feb 13 15:32:07.201195 sshd-session[6515]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:07.206448 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:32:07.206714 systemd[1]: sshd@17-78.47.85.163:22-139.178.68.195:32952.service: Deactivated successfully. Feb 13 15:32:07.209774 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:32:07.212697 systemd-logind[1489]: Removed session 18. Feb 13 15:32:12.388410 systemd[1]: Started sshd@18-78.47.85.163:22-139.178.68.195:49050.service - OpenSSH per-connection server daemon (139.178.68.195:49050). Feb 13 15:32:13.383642 sshd[6532]: Accepted publickey for core from 139.178.68.195 port 49050 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:32:13.385894 sshd-session[6532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:13.394008 systemd-logind[1489]: New session 19 of user core. Feb 13 15:32:13.399208 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:32:14.148985 sshd[6554]: Connection closed by 139.178.68.195 port 49050 Feb 13 15:32:14.149442 sshd-session[6532]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:14.156448 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:32:14.157735 systemd[1]: sshd@18-78.47.85.163:22-139.178.68.195:49050.service: Deactivated successfully. Feb 13 15:32:14.159699 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 15:32:14.162410 systemd-logind[1489]: Removed session 19. Feb 13 15:32:19.320624 systemd[1]: Started sshd@19-78.47.85.163:22-139.178.68.195:56208.service - OpenSSH per-connection server daemon (139.178.68.195:56208). 
Feb 13 15:32:20.316971 sshd[6590]: Accepted publickey for core from 139.178.68.195 port 56208 ssh2: RSA SHA256:dDBYffbys7IwrjEqnD+nC8HZkuMa8NXLOQVKUB+uHPI Feb 13 15:32:20.318798 sshd-session[6590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:32:20.324722 systemd-logind[1489]: New session 20 of user core. Feb 13 15:32:20.327087 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:32:21.077138 sshd[6592]: Connection closed by 139.178.68.195 port 56208 Feb 13 15:32:21.078156 sshd-session[6590]: pam_unix(sshd:session): session closed for user core Feb 13 15:32:21.084325 systemd[1]: sshd@19-78.47.85.163:22-139.178.68.195:56208.service: Deactivated successfully. Feb 13 15:32:21.086518 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:32:21.087943 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:32:21.088851 systemd-logind[1489]: Removed session 20.
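Note (editor): the remainder of the section is the same SSH lifecycle repeating for sessions 8 through 20: systemd starts a per-connection sshd@...service unit, sshd accepts the public key for user core, pam_unix and systemd-logind open the session and a session-N.scope is started, and on disconnect the scope and the per-connection unit are deactivated and the session is removed. A minimal, hypothetical sketch in the same spirit as the one above, assuming a plain-text journal export with the systemd-logind wording and timestamp layout seen here (year assumed, since the short timestamp format omits it), pairs the open/close events and prints each session's duration.

# Hypothetical helper, not part of this log: pair systemd-logind
# "New session N of user U." / "Removed session N." events from a plain-text
# journal export and print how long each session lasted.
# Assumes the timestamp directly precedes "systemd-logind[...]:" as in the
# log above, and that the year is supplied separately (it is not printed).
import re
from datetime import datetime

TS = r"([A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d{6})"
NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user (\S+)\.")
GONE = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(text: str, year: int = 2025) -> datetime:
    return datetime.strptime(f"{year} {text}", "%Y %b %d %H:%M:%S.%f")

def session_durations(path: str = "node.log") -> None:
    opened = {}  # session id -> (user, opened at)
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in NEW.finditer(line):
                opened[m.group(2)] = (m.group(3), parse_ts(m.group(1)))
            for m in GONE.finditer(line):
                user, start = opened.pop(m.group(2), (None, None))
                if start is not None:
                    secs = (parse_ts(m.group(1)) - start).total_seconds()
                    print(f"session {m.group(2)} ({user}): {secs:.1f}s")

if __name__ == "__main__":
    session_durations()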