Feb 13 15:25:25.900583 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:25:25.900619 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025 Feb 13 15:25:25.900638 kernel: KASLR enabled Feb 13 15:25:25.900649 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Feb 13 15:25:25.900660 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d98 Feb 13 15:25:25.900671 kernel: random: crng init done Feb 13 15:25:25.900684 kernel: secureboot: Secure boot disabled Feb 13 15:25:25.900696 kernel: ACPI: Early table checksum verification disabled Feb 13 15:25:25.900707 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Feb 13 15:25:25.900721 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:25:25.900733 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900745 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900756 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900768 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900782 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900797 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900809 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900822 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900834 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:25:25.900847 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:25:25.900859 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Feb 13 15:25:25.900871 kernel: NUMA: Failed to initialise from firmware Feb 13 15:25:25.900883 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:25:25.900895 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Feb 13 15:25:25.900907 kernel: Zone ranges: Feb 13 15:25:25.900922 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Feb 13 15:25:25.900934 kernel: DMA32 empty Feb 13 15:25:25.900946 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Feb 13 15:25:25.900958 kernel: Movable zone start for each node Feb 13 15:25:25.900970 kernel: Early memory node ranges Feb 13 15:25:25.900982 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff] Feb 13 15:25:25.900995 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Feb 13 15:25:25.901007 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Feb 13 15:25:25.901019 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Feb 13 15:25:25.901031 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Feb 13 15:25:25.901044 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Feb 13 15:25:25.901056 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Feb 13 15:25:25.901070 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Feb 13 15:25:25.901082 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Feb 13 15:25:25.901095 kernel: 
psci: probing for conduit method from ACPI. Feb 13 15:25:25.901112 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:25:25.901125 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:25:25.901138 kernel: psci: Trusted OS migration not required Feb 13 15:25:25.901153 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:25:25.901166 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:25:25.901180 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:25:25.901193 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:25:25.901206 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 13 15:25:25.901219 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:25:25.901232 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:25:25.901245 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:25:25.901258 kernel: CPU features: detected: Spectre-v4 Feb 13 15:25:25.901287 kernel: CPU features: detected: Spectre-BHB Feb 13 15:25:25.901326 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:25:25.901340 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:25:25.901353 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:25:25.901366 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:25:25.901379 kernel: alternatives: applying boot alternatives Feb 13 15:25:25.901395 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:25:25.901409 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:25:25.901423 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:25:25.901437 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:25:25.901450 kernel: Fallback order for Node 0: 0 Feb 13 15:25:25.901463 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Feb 13 15:25:25.901479 kernel: Policy zone: Normal Feb 13 15:25:25.901492 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:25:25.901505 kernel: software IO TLB: area num 2. Feb 13 15:25:25.901518 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Feb 13 15:25:25.901532 kernel: Memory: 3882680K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 213320K reserved, 0K cma-reserved) Feb 13 15:25:25.901545 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:25:25.901574 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:25:25.901589 kernel: rcu: RCU event tracing is enabled. Feb 13 15:25:25.901602 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:25:25.901623 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:25:25.901636 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:25:25.901650 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:25:25.901666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:25:25.901679 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:25:25.901692 kernel: GICv3: 256 SPIs implemented Feb 13 15:25:25.901704 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:25:25.901727 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:25:25.901740 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:25:25.901753 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:25:25.901766 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:25:25.901780 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:25:25.901793 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:25:25.901806 kernel: GICv3: using LPI property table @0x00000001000e0000 Feb 13 15:25:25.901822 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Feb 13 15:25:25.901836 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:25:25.901849 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:25:25.901862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:25:25.901875 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:25:25.901889 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:25:25.901903 kernel: Console: colour dummy device 80x25 Feb 13 15:25:25.901916 kernel: ACPI: Core revision 20230628 Feb 13 15:25:25.901931 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:25:25.901951 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:25:25.901967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:25:25.901981 kernel: landlock: Up and running. Feb 13 15:25:25.901994 kernel: SELinux: Initializing. Feb 13 15:25:25.902007 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:25:25.902021 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:25:25.902034 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:25:25.902048 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:25:25.902061 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:25:25.902075 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:25:25.902093 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:25:25.902107 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:25:25.902121 kernel: Remapping and enabling EFI services. Feb 13 15:25:25.902135 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:25:25.902148 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:25:25.902162 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:25:25.902175 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Feb 13 15:25:25.902189 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:25:25.902202 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:25:25.902216 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:25:25.902232 kernel: SMP: Total of 2 processors activated. 
Feb 13 15:25:25.902245 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:25:25.903334 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:25:25.903373 kernel: CPU features: detected: Common not Private translations Feb 13 15:25:25.903388 kernel: CPU features: detected: CRC32 instructions Feb 13 15:25:25.903402 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:25:25.903416 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:25:25.903431 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:25:25.903445 kernel: CPU features: detected: Privileged Access Never Feb 13 15:25:25.903462 kernel: CPU features: detected: RAS Extension Support Feb 13 15:25:25.903476 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:25:25.903491 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:25:25.903505 kernel: alternatives: applying system-wide alternatives Feb 13 15:25:25.903519 kernel: devtmpfs: initialized Feb 13 15:25:25.903533 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:25:25.903548 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:25:25.903584 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:25:25.903598 kernel: SMBIOS 3.0.0 present. Feb 13 15:25:25.903613 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Feb 13 15:25:25.903627 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:25:25.903641 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:25:25.903656 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:25:25.903670 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:25:25.903684 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:25:25.903699 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Feb 13 15:25:25.903715 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:25:25.903729 kernel: cpuidle: using governor menu Feb 13 15:25:25.903744 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 13 15:25:25.903758 kernel: ASID allocator initialised with 32768 entries Feb 13 15:25:25.903772 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:25:25.903786 kernel: Serial: AMBA PL011 UART driver Feb 13 15:25:25.903800 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:25:25.903866 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:25:25.903888 kernel: Modules: 508960 pages in range for PLT usage Feb 13 15:25:25.903903 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:25:25.903922 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:25:25.903936 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:25:25.903950 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:25:25.903965 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:25:25.903979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:25:25.903993 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:25:25.904007 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:25:25.904020 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:25:25.904035 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:25:25.904051 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:25:25.904066 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:25:25.904080 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:25:25.904094 kernel: ACPI: Interpreter enabled Feb 13 15:25:25.904108 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:25:25.904123 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:25:25.904137 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:25:25.904151 kernel: printk: console [ttyAMA0] enabled Feb 13 15:25:25.904166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:25:25.904439 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:25:25.904608 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:25:25.904762 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:25:25.904888 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:25:25.905008 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:25:25.905027 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:25:25.905042 kernel: PCI host bridge to bus 0000:00 Feb 13 15:25:25.905190 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:25:25.906460 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:25:25.906658 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:25:25.906777 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:25:25.906927 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:25:25.907067 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Feb 13 15:25:25.907221 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Feb 13 15:25:25.908582 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:25:25.908764 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.908894 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Feb 13 15:25:25.909135 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.909384 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Feb 13 15:25:25.909577 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.909785 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Feb 13 15:25:25.909925 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.910049 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Feb 13 15:25:25.910196 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.911447 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Feb 13 15:25:25.911652 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.911792 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Feb 13 15:25:25.911924 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.912046 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Feb 13 15:25:25.912190 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.913415 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Feb 13 15:25:25.913508 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Feb 13 15:25:25.913621 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Feb 13 15:25:25.913716 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Feb 13 15:25:25.913793 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Feb 13 15:25:25.913887 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:25:25.913969 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Feb 13 15:25:25.914048 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:25:25.914127 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:25:25.914212 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 15:25:25.916376 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Feb 13 15:25:25.916472 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Feb 13 15:25:25.916592 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Feb 13 15:25:25.916678 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Feb 13 15:25:25.916760 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Feb 13 15:25:25.916840 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Feb 13 15:25:25.916928 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 15:25:25.918317 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Feb 13 15:25:25.918425 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Feb 13 15:25:25.918508 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Feb 13 15:25:25.918596 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Feb 13 15:25:25.918714 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:25:25.918795 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Feb 13 15:25:25.918863 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Feb 13 15:25:25.918929 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Feb 13 15:25:25.918995 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Feb 13 15:25:25.919081 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Feb 13 15:25:25.919161 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:25:25.919240 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Feb 13 15:25:25.919337 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Feb 13 15:25:25.919404 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Feb 13 15:25:25.919466 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Feb 13 15:25:25.919534 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 15:25:25.919653 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:25:25.919731 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Feb 13 15:25:25.919816 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 15:25:25.919904 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Feb 13 15:25:25.919979 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Feb 13 15:25:25.920059 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 15:25:25.920122 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:25:25.920184 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Feb 13 15:25:25.920260 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 15:25:25.920410 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:25:25.920476 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Feb 13 15:25:25.920560 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 15:25:25.920627 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:25:25.920691 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Feb 13 15:25:25.920757 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 15:25:25.920820 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:25:25.920886 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Feb 13 15:25:25.920952 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 15:25:25.921014 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:25:25.921075 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Feb 13 15:25:25.921138 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Feb 13 15:25:25.921200 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:25:25.921298 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Feb 13 15:25:25.921369 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:25:25.921437 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Feb 13 15:25:25.921500 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:25:25.921576 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Feb 13 15:25:25.921649 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:25:25.921722 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Feb 13 15:25:25.921789 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:25:25.921860 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Feb 13 15:25:25.921924 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:25:25.922005 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Feb 13 15:25:25.922070 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:25:25.922143 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Feb 13 15:25:25.922209 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:25:25.922320 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Feb 13 15:25:25.922404 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:25:25.922471 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Feb 13 15:25:25.922537 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Feb 13 15:25:25.922658 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Feb 13 15:25:25.922724 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 15:25:25.922788 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Feb 13 15:25:25.923930 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 15:25:25.924038 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Feb 13 15:25:25.924110 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 15:25:25.924191 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Feb 13 15:25:25.924365 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 15:25:25.924454 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Feb 13 15:25:25.924525 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 15:25:25.924605 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Feb 13 15:25:25.924683 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 15:25:25.924750 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Feb 13 15:25:25.924818 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 15:25:25.924895 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Feb 13 15:25:25.924960 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 15:25:25.925036 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Feb 13 15:25:25.925102 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Feb 13 15:25:25.925181 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Feb 13 15:25:25.925257 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Feb 13 15:25:25.926593 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:25:25.926700 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Feb 13 15:25:25.926777 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Feb 13 15:25:25.926851 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 15:25:25.926921 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Feb 13 15:25:25.927009 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:25:25.927103 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Feb 13 15:25:25.927190 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Feb 13 15:25:25.927264 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 15:25:25.927384 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Feb 13 15:25:25.927458 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:25:25.927541 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Feb 13 15:25:25.927666 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Feb 13 15:25:25.927761 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Feb 13 15:25:25.927832 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 15:25:25.927910 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Feb 13 15:25:25.927981 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:25:25.928068 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Feb 13 15:25:25.928137 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Feb 13 15:25:25.928213 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 15:25:25.930379 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Feb 13 15:25:25.930504 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:25:25.930648 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Feb 13 15:25:25.930729 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Feb 13 15:25:25.930801 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Feb 13 15:25:25.930864 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 15:25:25.930925 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Feb 13 15:25:25.930994 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:25:25.931078 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Feb 13 15:25:25.931150 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Feb 13 15:25:25.931215 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Feb 13 15:25:25.931821 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 15:25:25.931925 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Feb 13 15:25:25.932003 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:25:25.932076 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Feb 13 15:25:25.932142 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Feb 13 15:25:25.932206 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Feb 13 15:25:25.932304 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Feb 13 15:25:25.932380 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 15:25:25.932444 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Feb 13 15:25:25.932515 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:25:25.932597 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Feb 13 15:25:25.932661 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 15:25:25.932728 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Feb 13 15:25:25.932798 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:25:25.932867 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Feb 13 15:25:25.932935 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Feb 13 15:25:25.932999 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Feb 13 15:25:25.933061 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:25:25.933125 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:25:25.933182 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:25:25.933246 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:25:25.933374 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 15:25:25.933439 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Feb 13 15:25:25.933508 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Feb 13 15:25:25.933624 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Feb 13 15:25:25.933693 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Feb 13 15:25:25.933757 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Feb 13 15:25:25.933841 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Feb 13 15:25:25.933917 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Feb 13 15:25:25.934000 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Feb 13 15:25:25.934073 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Feb 13 15:25:25.934139 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Feb 13 15:25:25.934198 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Feb 13 15:25:25.934263 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Feb 13 15:25:25.934375 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Feb 13 15:25:25.934448 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Feb 13 15:25:25.934516 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Feb 13 15:25:25.934602 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Feb 13 15:25:25.934666 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Feb 13 15:25:25.934731 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Feb 13 15:25:25.934796 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Feb 13 15:25:25.934863 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Feb 13 15:25:25.934937 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Feb 13 15:25:25.934996 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Feb 13 15:25:25.935054 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Feb 13 15:25:25.935121 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Feb 13 15:25:25.935179 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Feb 13 15:25:25.935237 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Feb 13 15:25:25.935247 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:25:25.935255 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:25:25.935263 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:25:25.935285 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:25:25.935293 kernel: iommu: Default domain type: Translated Feb 13 15:25:25.935303 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:25:25.935311 kernel: efivars: Registered efivars operations Feb 13 15:25:25.935318 kernel: vgaarb: loaded Feb 13 15:25:25.935326 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:25:25.935334 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:25:25.935346 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:25:25.935354 kernel: pnp: PnP ACPI init Feb 13 15:25:25.935445 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:25:25.935465 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:25:25.935476 kernel: NET: Registered PF_INET protocol family Feb 13 15:25:25.935484 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:25:25.935494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:25:25.935502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:25:25.935510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:25:25.935518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:25:25.935526 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:25:25.935533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:25:25.935543 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:25:25.935559 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:25:25.935641 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Feb 13 15:25:25.935652 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:25:25.935660 kernel: kvm [1]: HYP mode not available Feb 13 15:25:25.935667 kernel: Initialise system trusted keyrings Feb 13 15:25:25.935682 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:25:25.935690 kernel: Key type asymmetric registered Feb 13 15:25:25.935698 kernel: Asymmetric key parser 'x509' registered Feb 13 15:25:25.935708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:25:25.935716 kernel: io scheduler mq-deadline registered Feb 13 15:25:25.935724 kernel: io scheduler kyber registered Feb 13 15:25:25.935731 kernel: io scheduler bfq registered Feb 13 15:25:25.935739 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Feb 13 15:25:25.935823 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Feb 13 15:25:25.935891 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Feb 13 15:25:25.935954 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.936038 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Feb 13 15:25:25.936114 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Feb 13 15:25:25.936183 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.936250 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 15:25:25.936331 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 15:25:25.936396 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.936465 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 15:25:25.936529 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 15:25:25.936632 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.936700 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 15:25:25.936774 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 15:25:25.936845 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.936923 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 15:25:25.936988 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 15:25:25.937053 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.937118 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 15:25:25.937181 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 15:25:25.937244 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.937347 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 15:25:25.937417 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 15:25:25.937492 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.937503 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 15:25:25.937580 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 15:25:25.937656 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 15:25:25.937725 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 15:25:25.937735 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:25:25.937743 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:25:25.937751 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:25:25.937820 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 15:25:25.937889 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 15:25:25.937900 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:25:25.937908 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 15:25:25.937984 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 15:25:25.937998 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 15:25:25.938006 kernel: thunder_xcv, ver 1.0 Feb 13 15:25:25.938014 kernel: thunder_bgx, ver 1.0 Feb 13 15:25:25.938021 kernel: nicpf, ver 1.0 Feb 13 15:25:25.938029 kernel: nicvf, ver 
1.0 Feb 13 15:25:25.938105 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:25:25.938173 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:25:25 UTC (1739460325) Feb 13 15:25:25.938183 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:25:25.938193 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:25:25.938201 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:25:25.938209 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:25:25.938218 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:25:25.938226 kernel: Segment Routing with IPv6 Feb 13 15:25:25.938238 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:25:25.938246 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:25:25.938254 kernel: Key type dns_resolver registered Feb 13 15:25:25.938261 kernel: registered taskstats version 1 Feb 13 15:25:25.938303 kernel: Loading compiled-in X.509 certificates Feb 13 15:25:25.938311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51' Feb 13 15:25:25.938318 kernel: Key type .fscrypt registered Feb 13 15:25:25.938326 kernel: Key type fscrypt-provisioning registered Feb 13 15:25:25.938333 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 15:25:25.938341 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:25:25.938349 kernel: ima: No architecture policies found Feb 13 15:25:25.938356 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:25:25.938366 kernel: clk: Disabling unused clocks Feb 13 15:25:25.938381 kernel: Freeing unused kernel memory: 39680K Feb 13 15:25:25.938389 kernel: Run /init as init process Feb 13 15:25:25.938397 kernel: with arguments: Feb 13 15:25:25.938404 kernel: /init Feb 13 15:25:25.938412 kernel: with environment: Feb 13 15:25:25.938419 kernel: HOME=/ Feb 13 15:25:25.938426 kernel: TERM=linux Feb 13 15:25:25.938434 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:25:25.938443 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:25:25.938455 systemd[1]: Detected virtualization kvm. Feb 13 15:25:25.938464 systemd[1]: Detected architecture arm64. Feb 13 15:25:25.938472 systemd[1]: Running in initrd. Feb 13 15:25:25.938480 systemd[1]: No hostname configured, using default hostname. Feb 13 15:25:25.938487 systemd[1]: Hostname set to . Feb 13 15:25:25.938496 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:25:25.938505 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:25:25.938513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:25:25.938521 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:25:25.938530 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:25:25.938538 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:25:25.938559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Feb 13 15:25:25.938574 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:25:25.938585 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:25:25.938596 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:25:25.938604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:25:25.938612 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:25:25.938620 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:25:25.938628 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:25:25.938636 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:25:25.938644 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:25:25.938652 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:25:25.938661 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:25:25.938670 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:25:25.938678 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:25:25.938686 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:25:25.938694 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:25:25.938702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:25:25.938710 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:25:25.938718 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:25:25.938728 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:25:25.938736 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:25:25.938744 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:25:25.938752 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:25:25.938761 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:25:25.938769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:25:25.938777 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:25:25.938785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:25:25.938821 systemd-journald[237]: Collecting audit messages is disabled. Feb 13 15:25:25.938843 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:25:25.938854 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:25:25.938862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:25:25.938871 kernel: Bridge firewalling registered Feb 13 15:25:25.938880 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:25:25.938889 systemd-journald[237]: Journal started Feb 13 15:25:25.938917 systemd-journald[237]: Runtime Journal (/run/log/journal/062c6fa7334a456cb4cf74fe3b59afdb) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:25:25.939786 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:25:25.896418 systemd-modules-load[238]: Inserted module 'overlay' Feb 13 15:25:25.919018 systemd-modules-load[238]: Inserted module 'br_netfilter' Feb 13 15:25:25.943316 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:25:25.945072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:25:25.946861 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:25:25.959460 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:25:25.961647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:25:25.965818 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:25:25.969340 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:25:25.982037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:25:25.988283 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:25:25.993434 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:25:25.994458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:25:25.999849 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:25:26.011700 dracut-cmdline[271]: dracut-dracut-053 Feb 13 15:25:26.014261 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6 Feb 13 15:25:26.034410 systemd-resolved[273]: Positive Trust Anchors: Feb 13 15:25:26.034508 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:25:26.034542 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:25:26.046028 systemd-resolved[273]: Defaulting to hostname 'linux'. Feb 13 15:25:26.047578 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:25:26.048736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:25:26.100331 kernel: SCSI subsystem initialized Feb 13 15:25:26.105301 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:25:26.112307 kernel: iscsi: registered transport (tcp) Feb 13 15:25:26.126321 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:25:26.126421 kernel: QLogic iSCSI HBA Driver Feb 13 15:25:26.176682 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:25:26.183493 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Feb 13 15:25:26.203331 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:25:26.203401 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:25:26.204294 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:25:26.255329 kernel: raid6: neonx8 gen() 15672 MB/s Feb 13 15:25:26.272330 kernel: raid6: neonx4 gen() 15552 MB/s Feb 13 15:25:26.289326 kernel: raid6: neonx2 gen() 13144 MB/s Feb 13 15:25:26.306336 kernel: raid6: neonx1 gen() 10423 MB/s Feb 13 15:25:26.323321 kernel: raid6: int64x8 gen() 6871 MB/s Feb 13 15:25:26.340334 kernel: raid6: int64x4 gen() 7305 MB/s Feb 13 15:25:26.357341 kernel: raid6: int64x2 gen() 6090 MB/s Feb 13 15:25:26.374347 kernel: raid6: int64x1 gen() 5011 MB/s Feb 13 15:25:26.374437 kernel: raid6: using algorithm neonx8 gen() 15672 MB/s Feb 13 15:25:26.391383 kernel: raid6: .... xor() 11835 MB/s, rmw enabled Feb 13 15:25:26.391495 kernel: raid6: using neon recovery algorithm Feb 13 15:25:26.396406 kernel: xor: measuring software checksum speed Feb 13 15:25:26.396489 kernel: 8regs : 18969 MB/sec Feb 13 15:25:26.396512 kernel: 32regs : 19641 MB/sec Feb 13 15:25:26.396563 kernel: arm64_neon : 26848 MB/sec Feb 13 15:25:26.397313 kernel: xor: using function: arm64_neon (26848 MB/sec) Feb 13 15:25:26.448344 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:25:26.466443 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:25:26.474913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:25:26.513097 systemd-udevd[455]: Using default interface naming scheme 'v255'. Feb 13 15:25:26.517368 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:25:26.529463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:25:26.544351 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Feb 13 15:25:26.583589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:25:26.590569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:25:26.651595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:25:26.660742 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:25:26.677992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:25:26.679949 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:25:26.681482 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:25:26.683125 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:25:26.689474 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:25:26.715978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 15:25:26.750480 kernel: scsi host0: Virtio SCSI HBA Feb 13 15:25:26.757340 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:25:26.757601 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 15:25:26.775329 kernel: ACPI: bus type USB registered Feb 13 15:25:26.775380 kernel: usbcore: registered new interface driver usbfs Feb 13 15:25:26.775390 kernel: usbcore: registered new interface driver hub Feb 13 15:25:26.776343 kernel: usbcore: registered new device driver usb Feb 13 15:25:26.801298 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 15:25:26.802769 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 15:25:26.802884 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:25:26.802897 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:25:26.804187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:25:26.804326 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:25:26.808838 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:25:26.832453 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 15:25:26.832610 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 15:25:26.832714 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 15:25:26.832816 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 15:25:26.832894 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 15:25:26.832989 kernel: hub 1-0:1.0: USB hub found Feb 13 15:25:26.833094 kernel: hub 1-0:1.0: 4 ports detected Feb 13 15:25:26.833169 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 15:25:26.833763 kernel: hub 2-0:1.0: USB hub found Feb 13 15:25:26.833905 kernel: hub 2-0:1.0: 4 ports detected Feb 13 15:25:26.808725 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:25:26.838188 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 15:25:26.844467 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 15:25:26.844892 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 15:25:26.845020 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 15:25:26.845100 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 15:25:26.845185 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:25:26.845196 kernel: GPT:17805311 != 80003071 Feb 13 15:25:26.845204 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:25:26.845214 kernel: GPT:17805311 != 80003071 Feb 13 15:25:26.845223 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:25:26.845232 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:25:26.845242 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 15:25:26.810601 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:25:26.810759 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:25:26.814167 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:25:26.824577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:25:26.849612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 15:25:26.854517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:25:26.876209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:25:26.897302 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (527) Feb 13 15:25:26.900667 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (503) Feb 13 15:25:26.909431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 15:25:26.914754 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 15:25:26.923883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:25:26.930458 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 15:25:26.931118 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 15:25:26.936495 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:25:26.946241 disk-uuid[573]: Primary Header is updated. Feb 13 15:25:26.946241 disk-uuid[573]: Secondary Entries is updated. Feb 13 15:25:26.946241 disk-uuid[573]: Secondary Header is updated. Feb 13 15:25:27.059372 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 15:25:27.303457 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 15:25:27.441539 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 15:25:27.441587 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 15:25:27.443107 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 15:25:27.498052 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 15:25:27.498489 kernel: usbcore: registered new interface driver usbhid Feb 13 15:25:27.498517 kernel: usbhid: USB HID core driver Feb 13 15:25:27.962327 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 15:25:27.962968 disk-uuid[574]: The operation has completed successfully. Feb 13 15:25:28.013673 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:25:28.014401 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:25:28.032639 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:25:28.036024 sh[582]: Success Feb 13 15:25:28.048471 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:25:28.102563 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:25:28.111449 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:25:28.112191 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
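The escaped device unit names in the entries above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, dev-disk-by\x2dpartuuid-7130c94a\x2d... and friends) come from systemd's standard path escaping: slashes become dashes, and any byte outside A-Za-z0-9:_. is spelled as a \xNN hex escape, which is why every literal '-' shows up as \x2d. The real converter is systemd-escape(1); the sketch below re-implements only the common path case, for illustration.

    ALLOWED = set("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789:_.")

    def path_to_unit_name(path: str, suffix: str = ".device") -> str:
        # Simplified version of systemd's unit-name escaping for paths.
        trimmed = path.strip("/") or "/"
        out = []
        for i, byte in enumerate(trimmed.encode()):
            ch = chr(byte)
            if ch == "/":
                out.append("-")                   # path separators become dashes
            elif ch in ALLOWED and not (i == 0 and ch == "."):
                out.append(ch)
            else:
                out.append(f"\\x{byte:02x}")      # everything else is hex-escaped
        return "".join(out) + suffix

    print(path_to_unit_name("/dev/disk/by-label/EFI-SYSTEM"))
    # dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
    print(path_to_unit_name("/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"))
    # dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device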
Feb 13 15:25:28.144863 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 Feb 13 15:25:28.144923 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:25:28.144936 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:25:28.144949 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:25:28.145658 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:25:28.152318 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 15:25:28.155230 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:25:28.157221 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:25:28.164547 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:25:28.168492 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:25:28.177698 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:25:28.177750 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:25:28.177761 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:25:28.181328 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:25:28.181394 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:25:28.193217 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:25:28.194412 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:25:28.201889 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:25:28.207490 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:25:28.311440 ignition[666]: Ignition 2.20.0 Feb 13 15:25:28.311451 ignition[666]: Stage: fetch-offline Feb 13 15:25:28.311489 ignition[666]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:28.311497 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:28.311704 ignition[666]: parsed url from cmdline: "" Feb 13 15:25:28.311708 ignition[666]: no config URL provided Feb 13 15:25:28.311713 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:25:28.316102 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:25:28.311721 ignition[666]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:25:28.311726 ignition[666]: failed to fetch config: resource requires networking Feb 13 15:25:28.311902 ignition[666]: Ignition finished successfully Feb 13 15:25:28.330231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:25:28.337450 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:25:28.357663 systemd-networkd[770]: lo: Link UP Feb 13 15:25:28.357677 systemd-networkd[770]: lo: Gained carrier Feb 13 15:25:28.360576 systemd-networkd[770]: Enumeration completed Feb 13 15:25:28.362027 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:28.362032 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 15:25:28.362065 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:25:28.363594 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:28.363597 systemd-networkd[770]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:25:28.364076 systemd-networkd[770]: eth0: Link UP Feb 13 15:25:28.364079 systemd-networkd[770]: eth0: Gained carrier Feb 13 15:25:28.364085 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:28.364428 systemd[1]: Reached target network.target - Network. Feb 13 15:25:28.369814 systemd-networkd[770]: eth1: Link UP Feb 13 15:25:28.369818 systemd-networkd[770]: eth1: Gained carrier Feb 13 15:25:28.369828 systemd-networkd[770]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:28.372477 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 15:25:28.386143 ignition[772]: Ignition 2.20.0 Feb 13 15:25:28.386155 ignition[772]: Stage: fetch Feb 13 15:25:28.386365 ignition[772]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:28.386375 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:28.386456 ignition[772]: parsed url from cmdline: "" Feb 13 15:25:28.386460 ignition[772]: no config URL provided Feb 13 15:25:28.386464 ignition[772]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:25:28.386471 ignition[772]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:25:28.386597 ignition[772]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Feb 13 15:25:28.387432 ignition[772]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 15:25:28.392329 systemd-networkd[770]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:25:28.424383 systemd-networkd[770]: eth0: DHCPv4 address 188.245.200.94/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:25:28.588071 ignition[772]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Feb 13 15:25:28.593596 ignition[772]: GET result: OK Feb 13 15:25:28.593755 ignition[772]: parsing config with SHA512: 244c0f9ffc3adc32d14ac82aee470ccc244ff8b006efe26b5316609ca3b1469aac6f59d81a0f33c0ec0d6832f9541bab9651b7af4a2be13a82d67cd0f89fb466 Feb 13 15:25:28.602251 unknown[772]: fetched base config from "system" Feb 13 15:25:28.602261 unknown[772]: fetched base config from "system" Feb 13 15:25:28.602659 ignition[772]: fetch: fetch complete Feb 13 15:25:28.602284 unknown[772]: fetched user config from "hetzner" Feb 13 15:25:28.602665 ignition[772]: fetch: fetch passed Feb 13 15:25:28.605704 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 15:25:28.602708 ignition[772]: Ignition finished successfully Feb 13 15:25:28.615702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
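These fetch-stage entries record a straightforward retry loop: the first GET to the Hetzner metadata endpoint fails with "network is unreachable" because DHCP has not configured an interface yet, addresses arrive on eth0/eth1 a moment later, attempt #2 succeeds, and the retrieved user data is identified by its SHA512 before being parsed. Ignition itself is a Go binary; the Python sketch below only mirrors the retry-then-hash behaviour the journal shows (the URL is taken from the log, everything else is illustrative).

    import hashlib
    import time
    import urllib.error
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(url: str = USERDATA_URL, attempts: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                # Early attempts can fail with "network is unreachable"
                # until DHCP has configured an interface, as seen above.
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(delay)
        raise RuntimeError("could not reach the metadata service")

    if __name__ == "__main__":
        config = fetch_userdata()
        print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())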
Feb 13 15:25:28.631003 ignition[779]: Ignition 2.20.0 Feb 13 15:25:28.631012 ignition[779]: Stage: kargs Feb 13 15:25:28.631183 ignition[779]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:28.631192 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:28.632239 ignition[779]: kargs: kargs passed Feb 13 15:25:28.632307 ignition[779]: Ignition finished successfully Feb 13 15:25:28.636345 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:25:28.650629 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:25:28.663514 ignition[786]: Ignition 2.20.0 Feb 13 15:25:28.663538 ignition[786]: Stage: disks Feb 13 15:25:28.663738 ignition[786]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:28.663753 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:28.666550 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:25:28.664817 ignition[786]: disks: disks passed Feb 13 15:25:28.669083 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:25:28.664874 ignition[786]: Ignition finished successfully Feb 13 15:25:28.669840 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:25:28.670658 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:25:28.671612 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:25:28.672871 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:25:28.679482 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:25:28.696627 systemd-fsck[795]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 15:25:28.701233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:25:28.708939 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:25:28.758308 kernel: EXT4-fs (sda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none. Feb 13 15:25:28.758662 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:25:28.759650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:25:28.776504 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:25:28.780045 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:25:28.786308 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 15:25:28.788594 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:25:28.789862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:25:28.792397 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:25:28.795334 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (803) Feb 13 15:25:28.795365 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:25:28.795376 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:25:28.795929 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:25:28.800288 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:25:28.804449 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:25:28.804502 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:25:28.812770 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:25:28.861930 initrd-setup-root[831]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:25:28.865394 coreos-metadata[805]: Feb 13 15:25:28.865 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 15:25:28.869236 coreos-metadata[805]: Feb 13 15:25:28.869 INFO Fetch successful Feb 13 15:25:28.869236 coreos-metadata[805]: Feb 13 15:25:28.869 INFO wrote hostname ci-4152-2-1-1-73ff0440f7 to /sysroot/etc/hostname Feb 13 15:25:28.872547 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:25:28.873772 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:25:28.876728 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:25:28.880200 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:25:28.981721 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:25:28.991537 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:25:28.997697 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:25:29.005308 kernel: BTRFS info (device sda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:25:29.027070 ignition[921]: INFO : Ignition 2.20.0 Feb 13 15:25:29.027070 ignition[921]: INFO : Stage: mount Feb 13 15:25:29.028232 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:29.028232 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:29.030678 ignition[921]: INFO : mount: mount passed Feb 13 15:25:29.030678 ignition[921]: INFO : Ignition finished successfully Feb 13 15:25:29.030208 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:25:29.037433 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:25:29.039208 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:25:29.144413 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:25:29.152586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:25:29.163314 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (933) Feb 13 15:25:29.164386 kernel: BTRFS info (device sda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643 Feb 13 15:25:29.164426 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:25:29.164450 kernel: BTRFS info (device sda6): using free space tree Feb 13 15:25:29.167501 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 15:25:29.167582 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 15:25:29.171377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:25:29.209316 ignition[950]: INFO : Ignition 2.20.0 Feb 13 15:25:29.209316 ignition[950]: INFO : Stage: files Feb 13 15:25:29.212094 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:29.212094 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:29.212094 ignition[950]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:25:29.215048 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:25:29.215048 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:25:29.217477 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:25:29.218679 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:25:29.218679 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:25:29.218041 unknown[950]: wrote ssh authorized keys file for user: core Feb 13 15:25:29.220916 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:25:29.220916 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 13 15:25:29.289989 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 15:25:29.472572 systemd-networkd[770]: eth1: Gained IPv6LL Feb 13 15:25:29.578015 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:25:29.580556 ignition[950]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:25:29.580556 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:25:30.131424 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 15:25:30.432961 systemd-networkd[770]: eth0: Gained IPv6LL Feb 13 15:25:30.762391 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:25:30.762391 ignition[950]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 15:25:30.765807 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:25:30.765807 ignition[950]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:25:30.765807 ignition[950]: INFO : files: files passed Feb 13 15:25:30.765807 ignition[950]: INFO : Ignition finished successfully Feb 13 15:25:30.768793 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:25:30.776576 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:25:30.779528 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:25:30.782444 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:25:30.784350 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Feb 13 15:25:30.794032 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:25:30.794032 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:25:30.796254 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:25:30.799438 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:25:30.800541 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:25:30.810579 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:25:30.852451 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:25:30.852580 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:25:30.853724 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:25:30.854756 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:25:30.855939 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:25:30.858077 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:25:30.878323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:25:30.885506 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:25:30.900934 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:25:30.901712 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:25:30.902946 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:25:30.904689 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:25:30.904828 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:25:30.906465 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:25:30.907037 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:25:30.907985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:25:30.908990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:25:30.910144 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:25:30.911248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:25:30.912412 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:25:30.913537 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:25:30.914648 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:25:30.915609 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:25:30.916447 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:25:30.916588 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:25:30.917829 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:25:30.918450 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:25:30.919454 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:25:30.919566 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 15:25:30.920571 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:25:30.920689 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:25:30.922233 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:25:30.922370 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:25:30.923828 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:25:30.923938 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:25:30.924966 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 15:25:30.925066 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 15:25:30.939730 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:25:30.941128 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:25:30.941437 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:25:30.944535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:25:30.947932 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:25:30.948542 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:25:30.951477 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 15:25:30.951675 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:25:30.959992 ignition[1003]: INFO : Ignition 2.20.0 Feb 13 15:25:30.959992 ignition[1003]: INFO : Stage: umount Feb 13 15:25:30.961649 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:25:30.961649 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 15:25:30.961649 ignition[1003]: INFO : umount: umount passed Feb 13 15:25:30.961649 ignition[1003]: INFO : Ignition finished successfully Feb 13 15:25:30.963056 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:25:30.963819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:25:30.965189 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:25:30.967525 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:25:30.971338 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:25:30.971525 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:25:30.976254 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:25:30.976374 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:25:30.977103 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 15:25:30.977149 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 15:25:30.979759 systemd[1]: Stopped target network.target - Network. Feb 13 15:25:30.980398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:25:30.980455 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:25:30.981057 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:25:30.983741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:25:30.987355 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:25:30.988178 systemd[1]: Stopped target slices.target - Slice Units. 
Feb 13 15:25:30.988761 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:25:30.992947 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:25:30.993036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:25:30.994159 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:25:30.994223 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:25:30.997822 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:25:30.997908 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:25:30.998663 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:25:30.998712 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:25:30.999730 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:25:31.002522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:25:31.003335 systemd-networkd[770]: eth0: DHCPv6 lease lost Feb 13 15:25:31.005862 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:25:31.009369 systemd-networkd[770]: eth1: DHCPv6 lease lost Feb 13 15:25:31.011530 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:25:31.013109 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:25:31.014849 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:25:31.014928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:25:31.017381 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:25:31.017452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:25:31.018132 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:25:31.018182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:25:31.024586 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:25:31.025089 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:25:31.025158 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:25:31.030421 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:25:31.032903 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:25:31.033027 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:25:31.046016 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:25:31.046116 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:25:31.046808 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:25:31.046851 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:25:31.048802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:25:31.048849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:25:31.049857 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:25:31.049965 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:25:31.055114 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:25:31.055352 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:25:31.056995 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 13 15:25:31.057053 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:25:31.058023 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:25:31.058055 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:25:31.058726 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:25:31.058772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:25:31.059922 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:25:31.059970 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:25:31.061543 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:25:31.061587 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:25:31.072071 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:25:31.073335 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:25:31.073431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:25:31.074727 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:25:31.074795 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:25:31.082549 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:25:31.083402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:25:31.085207 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:25:31.093676 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:25:31.101282 systemd[1]: Switching root. Feb 13 15:25:31.135187 systemd-journald[237]: Journal stopped Feb 13 15:25:31.925018 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Feb 13 15:25:31.925076 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:25:31.925092 kernel: SELinux: policy capability open_perms=1 Feb 13 15:25:31.925101 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:25:31.925111 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:25:31.925123 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:25:31.925132 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:25:31.925142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:25:31.925155 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:25:31.925165 kernel: audit: type=1403 audit(1739460331.261:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:25:31.925175 systemd[1]: Successfully loaded SELinux policy in 34.739ms. Feb 13 15:25:31.925195 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.489ms. Feb 13 15:25:31.925206 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:25:31.925216 systemd[1]: Detected virtualization kvm. Feb 13 15:25:31.925228 systemd[1]: Detected architecture arm64. Feb 13 15:25:31.925238 systemd[1]: Detected first boot. Feb 13 15:25:31.925248 systemd[1]: Hostname set to <ci-4152-2-1-1-73ff0440f7>. Feb 13 15:25:31.925260 systemd[1]: Initializing machine ID from VM UUID. 
Feb 13 15:25:31.925410 zram_generator::config[1045]: No configuration found. Feb 13 15:25:31.925624 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:25:31.925807 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:25:31.925829 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:25:31.925840 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:25:31.925851 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:25:31.925865 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:25:31.925875 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:25:31.925889 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:25:31.925900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:25:31.925910 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:25:31.925921 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:25:31.925933 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:25:31.925944 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:25:31.925955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:25:31.925969 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:25:31.925979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:25:31.925989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 15:25:31.926000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:25:31.926011 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:25:31.926021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:25:31.926033 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:25:31.926043 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:25:31.926054 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:25:31.926069 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:25:31.926079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:25:31.926090 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:25:31.926102 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:25:31.926113 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:25:31.926123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:25:31.926134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:25:31.926144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:25:31.926155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:25:31.926165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:25:31.926175 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Feb 13 15:25:31.926185 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:25:31.926198 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:25:31.926209 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:25:31.926219 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:25:31.926229 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:25:31.926239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:25:31.926250 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:25:31.926260 systemd[1]: Reached target machines.target - Containers. Feb 13 15:25:31.926281 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:25:31.926292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:25:31.926306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:25:31.926321 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:25:31.926331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:25:31.926342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:25:31.926352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:25:31.926365 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:25:31.926376 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:25:31.926387 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:25:31.926398 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:25:31.926407 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:25:31.926418 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:25:31.926428 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:25:31.926438 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:25:31.926449 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:25:31.926460 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:25:31.926471 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:25:31.926520 systemd-journald[1108]: Collecting audit messages is disabled. Feb 13 15:25:31.926543 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:25:31.926555 systemd-journald[1108]: Journal started Feb 13 15:25:31.926577 systemd-journald[1108]: Runtime Journal (/run/log/journal/062c6fa7334a456cb4cf74fe3b59afdb) is 8.0M, max 76.6M, 68.6M free. Feb 13 15:25:31.741458 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:25:31.760776 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 15:25:31.761458 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 13 15:25:31.929286 kernel: loop: module loaded Feb 13 15:25:31.930358 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:25:31.930404 systemd[1]: Stopped verity-setup.service. Feb 13 15:25:31.935528 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:25:31.937107 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:25:31.938539 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:25:31.939411 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:25:31.939995 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:25:31.942575 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:25:31.943528 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:25:31.945088 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:25:31.949507 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:25:31.949772 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:25:31.952684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:25:31.952836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:25:31.954680 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:25:31.954835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:25:31.955746 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:25:31.955877 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:25:31.957322 kernel: fuse: init (API version 7.39) Feb 13 15:25:31.957857 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:25:31.958013 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:25:31.962566 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:25:31.964664 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:25:31.969752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:25:31.976334 kernel: ACPI: bus type drm_connector registered Feb 13 15:25:31.981211 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:25:31.983915 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:25:31.989916 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:25:31.998410 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:25:32.004595 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:25:32.008402 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:25:32.008445 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:25:32.012130 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:25:32.016139 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:25:32.020544 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:25:32.021225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:25:32.025508 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:25:32.036458 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:25:32.037055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:25:32.042588 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:25:32.043596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:25:32.046218 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:25:32.055504 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:25:32.059309 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:25:32.060242 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:25:32.060947 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:25:32.062605 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:25:32.067777 systemd-journald[1108]: Time spent on flushing to /var/log/journal/062c6fa7334a456cb4cf74fe3b59afdb is 39.079ms for 1124 entries. Feb 13 15:25:32.067777 systemd-journald[1108]: System Journal (/var/log/journal/062c6fa7334a456cb4cf74fe3b59afdb) is 8.0M, max 584.8M, 576.8M free. Feb 13 15:25:32.135877 systemd-journald[1108]: Received client request to flush runtime journal. Feb 13 15:25:32.135925 kernel: loop0: detected capacity change from 0 to 116808 Feb 13 15:25:32.135939 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:25:32.078533 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:25:32.082663 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:25:32.088858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:25:32.090770 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:25:32.105244 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:25:32.108462 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:25:32.139926 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:25:32.147799 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:25:32.161309 kernel: loop1: detected capacity change from 0 to 194512 Feb 13 15:25:32.162261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:25:32.165444 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:25:32.169908 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:25:32.183378 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:25:32.191854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:25:32.207104 kernel: loop2: detected capacity change from 0 to 113536 Feb 13 15:25:32.214513 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. 
Feb 13 15:25:32.214710 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Feb 13 15:25:32.224291 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:25:32.241284 kernel: loop3: detected capacity change from 0 to 8 Feb 13 15:25:32.257296 kernel: loop4: detected capacity change from 0 to 116808 Feb 13 15:25:32.273311 kernel: loop5: detected capacity change from 0 to 194512 Feb 13 15:25:32.298301 kernel: loop6: detected capacity change from 0 to 113536 Feb 13 15:25:32.314299 kernel: loop7: detected capacity change from 0 to 8 Feb 13 15:25:32.315865 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Feb 13 15:25:32.316302 (sd-merge)[1187]: Merged extensions into '/usr'. Feb 13 15:25:32.323589 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:25:32.323607 systemd[1]: Reloading... Feb 13 15:25:32.438690 zram_generator::config[1214]: No configuration found. Feb 13 15:25:32.553350 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:25:32.594738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:25:32.641300 systemd[1]: Reloading finished in 317 ms. Feb 13 15:25:32.667303 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:25:32.674323 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:25:32.683583 systemd[1]: Starting ensure-sysext.service... Feb 13 15:25:32.686404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:25:32.702510 systemd[1]: Reloading requested from client PID 1251 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:25:32.702528 systemd[1]: Reloading... Feb 13 15:25:32.703789 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:25:32.704354 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:25:32.705106 systemd-tmpfiles[1252]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:25:32.705492 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Feb 13 15:25:32.705700 systemd-tmpfiles[1252]: ACLs are not supported, ignoring. Feb 13 15:25:32.708435 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:25:32.708581 systemd-tmpfiles[1252]: Skipping /boot Feb 13 15:25:32.717537 systemd-tmpfiles[1252]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:25:32.717665 systemd-tmpfiles[1252]: Skipping /boot Feb 13 15:25:32.783867 zram_generator::config[1279]: No configuration found. Feb 13 15:25:32.863785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:25:32.910682 systemd[1]: Reloading finished in 207 ms. Feb 13 15:25:32.931319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 15:25:32.936807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:25:32.952781 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:25:32.957655 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:25:32.966861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:25:32.974487 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:25:32.976738 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:25:32.979804 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:25:32.983855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:25:32.992511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:25:32.999224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:25:33.009591 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:25:33.010181 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:25:33.010917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:25:33.011068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:25:33.015450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:25:33.024541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:25:33.025214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:25:33.029306 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:25:33.032218 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:25:33.035426 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:25:33.037337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:25:33.038553 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:25:33.038699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:25:33.043608 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:25:33.044316 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:25:33.056200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:25:33.064622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:25:33.072697 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:25:33.084051 systemd-udevd[1329]: Using default interface naming scheme 'v255'. Feb 13 15:25:33.084711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:25:33.094603 augenrules[1358]: No rules Feb 13 15:25:33.095691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:25:33.097448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 15:25:33.099596 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:25:33.102447 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:25:33.103306 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:25:33.105042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:25:33.106867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:25:33.108333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:25:33.110725 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:25:33.110854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:25:33.112377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:25:33.112518 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:25:33.114775 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:25:33.115411 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:25:33.117589 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:25:33.123482 systemd[1]: Finished ensure-sysext.service. Feb 13 15:25:33.125336 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:25:33.138149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:25:33.138228 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:25:33.142298 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:25:33.143630 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:25:33.143912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:25:33.146604 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:25:33.155633 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:25:33.239179 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:25:33.240029 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:25:33.260831 systemd-networkd[1386]: lo: Link UP Feb 13 15:25:33.260843 systemd-networkd[1386]: lo: Gained carrier Feb 13 15:25:33.261596 systemd-networkd[1386]: Enumeration completed Feb 13 15:25:33.262147 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:25:33.280739 systemd-resolved[1327]: Positive Trust Anchors: Feb 13 15:25:33.280818 systemd-resolved[1327]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:25:33.280850 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:25:33.285967 systemd-resolved[1327]: Using system hostname 'ci-4152-2-1-1-73ff0440f7'. Feb 13 15:25:33.293907 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:25:33.294603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:25:33.296005 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:25:33.296051 systemd[1]: Reached target network.target - Network. Feb 13 15:25:33.297402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:25:33.330990 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:33.331004 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:25:33.331717 systemd-networkd[1386]: eth0: Link UP Feb 13 15:25:33.331724 systemd-networkd[1386]: eth0: Gained carrier Feb 13 15:25:33.331737 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:33.398378 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1398) Feb 13 15:25:33.410442 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:25:33.410711 systemd-networkd[1386]: eth0: DHCPv4 address 188.245.200.94/32, gateway 172.31.1.1 acquired from 172.31.1.1 Feb 13 15:25:33.413299 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:33.417776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 15:25:33.425632 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:25:33.433082 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Feb 13 15:25:33.433211 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:25:33.437516 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:25:33.439684 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:25:33.444475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:25:33.445058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:25:33.445089 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
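Besides the root DNSSEC trust anchor (the ". IN DS 20326 ..." record), resolved lists its negative trust anchors: private and locally served zones that are exempted from DNSSEC validation. A short sketch that checks whether a name falls under one of those zones, using the list exactly as logged:

    NEGATIVE_TRUST_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "16.172.in-addr.arpa", "17.172.in-addr.arpa",
        "18.172.in-addr.arpa", "19.172.in-addr.arpa", "20.172.in-addr.arpa",
        "21.172.in-addr.arpa", "22.172.in-addr.arpa", "23.172.in-addr.arpa",
        "24.172.in-addr.arpa", "25.172.in-addr.arpa", "26.172.in-addr.arpa",
        "27.172.in-addr.arpa", "28.172.in-addr.arpa", "29.172.in-addr.arpa",
        "30.172.in-addr.arpa", "31.172.in-addr.arpa", "170.0.0.192.in-addr.arpa",
        "171.0.0.192.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
        "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal", "intranet",
        "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        # True if `name` equals or sits below one of the logged negative trust anchors.
        labels = name.rstrip(".").lower().split(".")
        return any(".".join(labels[i:]) in NEGATIVE_TRUST_ANCHORS for i in range(len(labels)))

    print(under_negative_anchor("printer.lan"))                  # True
    print(under_negative_anchor("94.200.245.188.in-addr.arpa"))  # False, public reverse zone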
Feb 13 15:25:33.446741 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:33.446753 systemd-networkd[1386]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:25:33.447301 systemd-networkd[1386]: eth1: Link UP Feb 13 15:25:33.447308 systemd-networkd[1386]: eth1: Gained carrier Feb 13 15:25:33.447322 systemd-networkd[1386]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:25:33.447593 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:33.456001 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:25:33.462157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:25:33.462332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:25:33.463438 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:33.466927 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:25:33.467345 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:25:33.468612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:25:33.472398 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:25:33.472592 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:25:33.473846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:25:33.478383 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Feb 13 15:25:33.478472 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 15:25:33.478496 kernel: [drm] features: -context_init Feb 13 15:25:33.482468 systemd-networkd[1386]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:25:33.483371 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:33.483608 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:33.485375 kernel: [drm] number of scanouts: 1 Feb 13 15:25:33.486308 kernel: [drm] number of cap sets: 0 Feb 13 15:25:33.488304 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Feb 13 15:25:33.494294 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:25:33.505541 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 15:25:33.534961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:25:33.542532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:25:33.542785 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:25:33.548606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:25:33.614200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:25:33.681365 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:25:33.689609 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
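Both interfaces receive /32 leases whose gateways sit outside the assigned prefix (172.31.1.1 for eth0, 10.0.0.1 for eth1), so the route to the gateway has to be installed as an explicit on-link host route rather than derived from a subnet. The arithmetic, using the addresses from the lease entries above:

    import ipaddress

    leases = {
        "eth0": ("188.245.200.94/32", "172.31.1.1"),
        "eth1": ("10.0.0.3/32", "10.0.0.1"),
    }

    for ifname, (prefix, gateway) in leases.items():
        network = ipaddress.ip_network(prefix, strict=False)
        gw = ipaddress.ip_address(gateway)
        # A /32 contains only the host address itself, so the gateway can never be on-subnet.
        print(f"{ifname}: gateway {gw} inside {network}? {gw in network}")  # False for both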
Feb 13 15:25:33.703719 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:25:33.733327 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:25:33.735913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:25:33.737598 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:25:33.739049 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:25:33.740360 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:25:33.741206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:25:33.741911 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:25:33.742607 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:25:33.743195 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:25:33.743223 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:25:33.743730 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:25:33.744995 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:25:33.747145 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:25:33.757672 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:25:33.760821 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:25:33.762067 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:25:33.762732 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:25:33.763194 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:25:33.763752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:25:33.763780 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:25:33.766434 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:25:33.771501 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 15:25:33.773534 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:25:33.775601 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:25:33.777584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:25:33.780450 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:25:33.781092 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:25:33.784510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:25:33.795545 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 15:25:33.797418 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Feb 13 15:25:33.801473 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:25:33.806823 jq[1452]: false Feb 13 15:25:33.807518 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
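coreos-metadata starts here and, a few entries further on, logs fetching http://169.254.169.254/hetzner/v1/metadata. A minimal standard-library sketch of that kind of request; it only succeeds from inside a Hetzner instance, and the endpoint path is simply the one that appears in the log:

    import urllib.request

    METADATA_URL = "http://169.254.169.254/hetzner/v1/metadata"  # link-local endpoint from the log

    def fetch_metadata(url: str = METADATA_URL, timeout: float = 2.0) -> str:
        # Fetch the instance metadata document; raises URLError off-instance or on timeout.
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read().decode("utf-8", errors="replace")

    if __name__ == "__main__":
        print(fetch_metadata())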
Feb 13 15:25:33.813508 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:25:33.816309 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:25:33.816808 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:25:33.822839 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:25:33.828429 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:25:33.830916 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:25:33.836562 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:25:33.836741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:25:33.839720 dbus-daemon[1451]: [system] SELinux support is enabled Feb 13 15:25:33.839968 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:25:33.849081 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:25:33.849298 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:25:33.856195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:25:33.856427 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:25:33.858780 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:25:33.858813 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:25:33.877608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:25:33.877783 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:25:33.880395 extend-filesystems[1453]: Found loop4 Feb 13 15:25:33.883859 jq[1464]: true Feb 13 15:25:33.884077 extend-filesystems[1453]: Found loop5 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found loop6 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found loop7 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda1 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda2 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda3 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found usr Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda4 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda6 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda7 Feb 13 15:25:33.884077 extend-filesystems[1453]: Found sda9 Feb 13 15:25:33.884077 extend-filesystems[1453]: Checking size of /dev/sda9 Feb 13 15:25:33.928967 update_engine[1463]: I20250213 15:25:33.928048 1463 main.cc:92] Flatcar Update Engine starting Feb 13 15:25:33.929196 tar[1469]: linux-arm64/helm Feb 13 15:25:33.929427 coreos-metadata[1450]: Feb 13 15:25:33.914 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Feb 13 15:25:33.929427 coreos-metadata[1450]: Feb 13 15:25:33.914 INFO Fetch successful Feb 13 15:25:33.929427 coreos-metadata[1450]: Feb 13 15:25:33.914 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Feb 13 15:25:33.929427 coreos-metadata[1450]: Feb 13 15:25:33.914 INFO Fetch successful Feb 13 15:25:33.904005 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:25:33.929790 jq[1487]: true Feb 13 15:25:33.943096 extend-filesystems[1453]: Resized partition /dev/sda9 Feb 13 15:25:33.943741 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:25:33.949711 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:25:33.954263 update_engine[1463]: I20250213 15:25:33.951496 1463 update_check_scheduler.cc:74] Next update check in 11m56s Feb 13 15:25:33.954322 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Feb 13 15:25:33.958470 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:25:34.007476 systemd-logind[1461]: New seat seat0. Feb 13 15:25:34.009377 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:25:34.009398 systemd-logind[1461]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Feb 13 15:25:34.009654 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:25:34.045262 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 15:25:34.050393 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:25:34.072756 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:25:34.078298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:25:34.091511 systemd[1]: Starting sshkeys.service... Feb 13 15:25:34.116295 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1400) Feb 13 15:25:34.138338 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 15:25:34.140704 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
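extend-filesystems triggers an on-line grow of the root filesystem, and the kernel reports resizing /dev/sda9 from 1617920 to 9393147 blocks. With the 4 KiB block size noted in the resize output that follows, that is roughly a 6.2 GiB image expanding to about 35.8 GiB:

    BLOCK_SIZE = 4096          # ext4 block size, reported as "(4k)" in the resize messages
    OLD_BLOCKS = 1_617_920     # filesystem size baked into the image, from the kernel log
    NEW_BLOCKS = 9_393_147     # size after the on-line resize

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~35.83 GiB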
Feb 13 15:25:34.154630 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 15:25:34.157409 extend-filesystems[1496]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 15:25:34.157409 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 15:25:34.157409 extend-filesystems[1496]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 15:25:34.163374 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Feb 13 15:25:34.163374 extend-filesystems[1453]: Found sr0 Feb 13 15:25:34.163597 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:25:34.165581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:25:34.229002 coreos-metadata[1527]: Feb 13 15:25:34.225 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 15:25:34.229002 coreos-metadata[1527]: Feb 13 15:25:34.226 INFO Fetch successful Feb 13 15:25:34.238385 unknown[1527]: wrote ssh authorized keys file for user: core Feb 13 15:25:34.270298 update-ssh-keys[1534]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:25:34.271384 containerd[1484]: time="2025-02-13T15:25:34.271188200Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:25:34.276887 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 15:25:34.278947 systemd[1]: Finished sshkeys.service. Feb 13 15:25:34.308626 containerd[1484]: time="2025-02-13T15:25:34.307976720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311142 containerd[1484]: time="2025-02-13T15:25:34.311093080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311142 containerd[1484]: time="2025-02-13T15:25:34.311133240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:25:34.311254 containerd[1484]: time="2025-02-13T15:25:34.311153040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:25:34.311367 containerd[1484]: time="2025-02-13T15:25:34.311345800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:25:34.311403 containerd[1484]: time="2025-02-13T15:25:34.311368960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311504 containerd[1484]: time="2025-02-13T15:25:34.311429400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311504 containerd[1484]: time="2025-02-13T15:25:34.311444920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311745 containerd[1484]: time="2025-02-13T15:25:34.311648880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311745 containerd[1484]: time="2025-02-13T15:25:34.311670920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311745 containerd[1484]: time="2025-02-13T15:25:34.311685400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311745 containerd[1484]: time="2025-02-13T15:25:34.311694000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.311839 containerd[1484]: time="2025-02-13T15:25:34.311765280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.312126 containerd[1484]: time="2025-02-13T15:25:34.311942040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:25:34.312126 containerd[1484]: time="2025-02-13T15:25:34.312042400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:25:34.312126 containerd[1484]: time="2025-02-13T15:25:34.312056680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:25:34.312193 containerd[1484]: time="2025-02-13T15:25:34.312132000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:25:34.312193 containerd[1484]: time="2025-02-13T15:25:34.312169960Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:25:34.318899 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:25:34.322171 containerd[1484]: time="2025-02-13T15:25:34.322125560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:25:34.322256 containerd[1484]: time="2025-02-13T15:25:34.322203960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:25:34.322256 containerd[1484]: time="2025-02-13T15:25:34.322221760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:25:34.322256 containerd[1484]: time="2025-02-13T15:25:34.322237800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:25:34.322325 containerd[1484]: time="2025-02-13T15:25:34.322299920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:25:34.322970 containerd[1484]: time="2025-02-13T15:25:34.322508240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:25:34.325598 containerd[1484]: time="2025-02-13T15:25:34.325563320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325833600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325858000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325875000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325890360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325903560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325918200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325931880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325952840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325966520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325978520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.325989520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.326010640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.326023680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.326985 containerd[1484]: time="2025-02-13T15:25:34.326035200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326048680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326060360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326075440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326087040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326099520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326111320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326126880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326146760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326158400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326170720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326185160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326206040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326224600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.327313 containerd[1484]: time="2025-02-13T15:25:34.326235720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327836800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327872000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327884840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327898800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327908240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327934240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327946000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:25:34.328589 containerd[1484]: time="2025-02-13T15:25:34.327957120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:25:34.330863 containerd[1484]: time="2025-02-13T15:25:34.329392880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:25:34.330863 containerd[1484]: time="2025-02-13T15:25:34.329497240Z" level=info msg="Connect containerd service" Feb 13 15:25:34.330863 containerd[1484]: time="2025-02-13T15:25:34.329546200Z" level=info msg="using legacy CRI server" Feb 13 15:25:34.330863 containerd[1484]: time="2025-02-13T15:25:34.329554400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:25:34.330863 containerd[1484]: time="2025-02-13T15:25:34.329804320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:25:34.332747 containerd[1484]: time="2025-02-13T15:25:34.332709760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:25:34.333695 
containerd[1484]: time="2025-02-13T15:25:34.333656920Z" level=info msg="Start subscribing containerd event" Feb 13 15:25:34.333791 containerd[1484]: time="2025-02-13T15:25:34.333777320Z" level=info msg="Start recovering state" Feb 13 15:25:34.333919 containerd[1484]: time="2025-02-13T15:25:34.333904400Z" level=info msg="Start event monitor" Feb 13 15:25:34.334295 containerd[1484]: time="2025-02-13T15:25:34.334263600Z" level=info msg="Start snapshots syncer" Feb 13 15:25:34.334356 containerd[1484]: time="2025-02-13T15:25:34.334344120Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:25:34.334416 containerd[1484]: time="2025-02-13T15:25:34.334403680Z" level=info msg="Start streaming server" Feb 13 15:25:34.336515 containerd[1484]: time="2025-02-13T15:25:34.336484960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:25:34.337208 containerd[1484]: time="2025-02-13T15:25:34.337186960Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:25:34.337358 containerd[1484]: time="2025-02-13T15:25:34.337344720Z" level=info msg="containerd successfully booted in 0.068946s" Feb 13 15:25:34.337443 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:25:34.605787 tar[1469]: linux-arm64/LICENSE Feb 13 15:25:34.605976 tar[1469]: linux-arm64/README.md Feb 13 15:25:34.616369 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:25:34.912434 systemd-networkd[1386]: eth0: Gained IPv6LL Feb 13 15:25:34.912978 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:34.916536 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:25:34.919025 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:25:34.932590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:34.935897 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:25:34.987323 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:25:35.040391 systemd-networkd[1386]: eth1: Gained IPv6LL Feb 13 15:25:35.040840 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Feb 13 15:25:35.252489 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:25:35.275794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:25:35.283350 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:25:35.290954 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:25:35.293331 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:25:35.301981 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:25:35.313140 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:25:35.322868 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:25:35.327426 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:25:35.328732 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:25:35.624556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:35.626450 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:25:35.627399 systemd[1]: Startup finished in 762ms (kernel) + 5.566s (initrd) + 4.400s (userspace) = 10.730s. 
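containerd boots cleanly, but its CRI plugin notes that no CNI configuration exists yet in /etc/cni/net.d, so pod networking stays uninitialized; that is expected this early, before anything has written a network config. A quick sketch of the same directory check, using the NetworkPluginConfDir and NetworkPluginBinDir values from the CRI config dump above (the matched file extensions are an assumption):

    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"   # NetworkPluginConfDir from the logged CRI config
    CNI_BIN_DIR = "/opt/cni/bin"      # NetworkPluginBinDir from the logged CRI config

    def cni_status():
        confs = sorted(
            glob.glob(os.path.join(CNI_CONF_DIR, "*.conf"))
            + glob.glob(os.path.join(CNI_CONF_DIR, "*.conflist"))
            + glob.glob(os.path.join(CNI_CONF_DIR, "*.json"))
        )
        plugins = sorted(os.listdir(CNI_BIN_DIR)) if os.path.isdir(CNI_BIN_DIR) else []
        return confs, plugins

    configs, plugins = cni_status()
    print("CNI configs:", configs or "none - matches the 'cni plugin not initialized' warning")
    print("CNI plugins:", plugins or "none")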
Feb 13 15:25:35.628921 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:36.247454 kubelet[1580]: E0213 15:25:36.247329 1580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:36.253086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:36.253489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:46.503845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:25:46.510754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:46.610940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:46.626879 (kubelet)[1600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:46.685419 kubelet[1600]: E0213 15:25:46.685337 1600 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:46.688807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:46.688981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:25:56.939672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 15:25:56.949677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:25:57.060301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:25:57.065151 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:25:57.119571 kubelet[1616]: E0213 15:25:57.119503 1616 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:25:57.122804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:25:57.122986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:05.178262 systemd-timesyncd[1377]: Contacted time server 5.189.151.39:123 (2.flatcar.pool.ntp.org). Feb 13 15:26:05.178408 systemd-timesyncd[1377]: Initial clock synchronization to Thu 2025-02-13 15:26:05.417570 UTC. Feb 13 15:26:07.155854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 15:26:07.162689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:07.279486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
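From this point kubelet restarts roughly every ten seconds and fails each time with the same error: /var/lib/kubelet/config.yaml does not exist. That file is normally written by whatever later bootstraps the node (kubeadm init/join, for instance), so the loop is expected on a machine that has not yet joined a cluster. A small sketch that simply mirrors the failing precondition:

    import os

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the kubelet error above

    def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
        # Mirror the failing check: the kubelet config file must exist and be readable.
        return os.path.isfile(path) and os.access(path, os.R_OK)

    if not kubelet_config_present():
        print(f"{KUBELET_CONFIG} missing - kubelet will keep exiting until the node is bootstrapped")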
Feb 13 15:26:07.281516 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:07.338208 kubelet[1632]: E0213 15:26:07.338141 1632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:07.340771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:07.340958 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:17.404563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 15:26:17.413762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:17.521291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:17.530910 (kubelet)[1648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:17.588954 kubelet[1648]: E0213 15:26:17.588884 1648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:17.592496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:17.592648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:18.899624 update_engine[1463]: I20250213 15:26:18.899317 1463 update_attempter.cc:509] Updating boot flags... Feb 13 15:26:18.946370 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1665) Feb 13 15:26:19.000328 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1669) Feb 13 15:26:27.654896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 15:26:27.663657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:27.776603 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:27.788329 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:27.842417 kubelet[1682]: E0213 15:26:27.842253 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:27.845449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:27.845651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:37.904356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 15:26:37.920627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:38.025936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:26:38.041931 (kubelet)[1698]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:38.094593 kubelet[1698]: E0213 15:26:38.094521 1698 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:38.098177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:38.098385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:48.154469 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 15:26:48.162539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:48.261669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:48.272828 (kubelet)[1714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:48.326307 kubelet[1714]: E0213 15:26:48.326186 1714 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:48.329152 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:48.329391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:58.404533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 15:26:58.414689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:58.527311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:58.544021 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:58.595334 kubelet[1731]: E0213 15:26:58.595223 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:58.597815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:58.597946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:08.654609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Feb 13 15:27:08.666723 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:08.772869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 15:27:08.786860 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:08.835822 kubelet[1746]: E0213 15:27:08.835671 1746 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:08.838169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:08.838425 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:18.904190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 15:27:18.913556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:19.011406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:19.016568 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:19.066544 kubelet[1763]: E0213 15:27:19.066456 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:19.071691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:19.072262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:29.154439 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 15:27:29.165846 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:29.266535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:29.283076 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:29.333437 kubelet[1779]: E0213 15:27:29.333389 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:29.336632 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:29.336795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:30.825008 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:27:30.830714 systemd[1]: Started sshd@0-188.245.200.94:22-139.178.89.65:59812.service - OpenSSH per-connection server daemon (139.178.89.65:59812). Feb 13 15:27:31.833457 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 59812 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:31.836698 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:31.847015 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:27:31.854022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
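sshd records the accepted key as "RSA SHA256:gOK0...", which is the OpenSSH fingerprint format: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. A sketch of that encoding; the demo blob below is a hand-built placeholder, not the key from this host:

    import base64
    import hashlib
    import struct

    def openssh_fingerprint(blob: bytes) -> str:
        # OpenSSH-style fingerprint: "SHA256:" plus the unpadded base64 of the blob's digest.
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    def fingerprint_from_authorized_keys(line: str) -> str:
        # authorized_keys lines look like "<key-type> <base64-blob> [comment]".
        return openssh_fingerprint(base64.b64decode(line.split()[1]))

    # Hypothetical ed25519-shaped blob (type string plus 32 zero bytes), only to show the encoding.
    demo_blob = struct.pack(">I", 11) + b"ssh-ed25519" + struct.pack(">I", 32) + bytes(32)
    print(openssh_fingerprint(demo_blob))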
Feb 13 15:27:31.858035 systemd-logind[1461]: New session 1 of user core. Feb 13 15:27:31.869448 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:27:31.878898 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:27:31.882426 (systemd)[1794]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:27:31.988680 systemd[1794]: Queued start job for default target default.target. Feb 13 15:27:31.997870 systemd[1794]: Created slice app.slice - User Application Slice. Feb 13 15:27:31.998105 systemd[1794]: Reached target paths.target - Paths. Feb 13 15:27:31.998135 systemd[1794]: Reached target timers.target - Timers. Feb 13 15:27:32.000038 systemd[1794]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:27:32.015514 systemd[1794]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:27:32.015706 systemd[1794]: Reached target sockets.target - Sockets. Feb 13 15:27:32.015720 systemd[1794]: Reached target basic.target - Basic System. Feb 13 15:27:32.015767 systemd[1794]: Reached target default.target - Main User Target. Feb 13 15:27:32.015797 systemd[1794]: Startup finished in 126ms. Feb 13 15:27:32.016060 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:27:32.024566 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:27:32.718232 systemd[1]: Started sshd@1-188.245.200.94:22-139.178.89.65:59826.service - OpenSSH per-connection server daemon (139.178.89.65:59826). Feb 13 15:27:33.710111 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 59826 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:33.712881 sshd-session[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:33.721254 systemd-logind[1461]: New session 2 of user core. Feb 13 15:27:33.726616 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:27:34.399640 sshd[1807]: Connection closed by 139.178.89.65 port 59826 Feb 13 15:27:34.399521 sshd-session[1805]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:34.404888 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:27:34.407166 systemd[1]: sshd@1-188.245.200.94:22-139.178.89.65:59826.service: Deactivated successfully. Feb 13 15:27:34.409803 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:27:34.413095 systemd-logind[1461]: Removed session 2. Feb 13 15:27:34.575795 systemd[1]: Started sshd@2-188.245.200.94:22-139.178.89.65:59832.service - OpenSSH per-connection server daemon (139.178.89.65:59832). Feb 13 15:27:35.553638 sshd[1812]: Accepted publickey for core from 139.178.89.65 port 59832 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:35.555371 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:35.561039 systemd-logind[1461]: New session 3 of user core. Feb 13 15:27:35.571657 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:27:36.224646 sshd[1814]: Connection closed by 139.178.89.65 port 59832 Feb 13 15:27:36.225542 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:36.230367 systemd[1]: sshd@2-188.245.200.94:22-139.178.89.65:59832.service: Deactivated successfully. Feb 13 15:27:36.232454 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:27:36.233205 systemd-logind[1461]: Session 3 logged out. 
Waiting for processes to exit. Feb 13 15:27:36.234622 systemd-logind[1461]: Removed session 3. Feb 13 15:27:36.405963 systemd[1]: Started sshd@3-188.245.200.94:22-139.178.89.65:50532.service - OpenSSH per-connection server daemon (139.178.89.65:50532). Feb 13 15:27:37.395043 sshd[1819]: Accepted publickey for core from 139.178.89.65 port 50532 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:37.397376 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:37.403448 systemd-logind[1461]: New session 4 of user core. Feb 13 15:27:37.408678 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:27:38.079495 sshd[1821]: Connection closed by 139.178.89.65 port 50532 Feb 13 15:27:38.078584 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:38.083813 systemd[1]: sshd@3-188.245.200.94:22-139.178.89.65:50532.service: Deactivated successfully. Feb 13 15:27:38.085944 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:27:38.087011 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:27:38.088386 systemd-logind[1461]: Removed session 4. Feb 13 15:27:38.251766 systemd[1]: Started sshd@4-188.245.200.94:22-139.178.89.65:50542.service - OpenSSH per-connection server daemon (139.178.89.65:50542). Feb 13 15:27:39.227912 sshd[1826]: Accepted publickey for core from 139.178.89.65 port 50542 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:39.229974 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:39.235481 systemd-logind[1461]: New session 5 of user core. Feb 13 15:27:39.246581 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:27:39.404006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 15:27:39.411664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:39.519769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:39.530720 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:39.589951 kubelet[1837]: E0213 15:27:39.589871 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:39.594213 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:39.594651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:39.758837 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:27:39.759158 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:39.775812 sudo[1845]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:39.934090 sshd[1828]: Connection closed by 139.178.89.65 port 50542 Feb 13 15:27:39.935574 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:39.942013 systemd[1]: sshd@4-188.245.200.94:22-139.178.89.65:50542.service: Deactivated successfully. Feb 13 15:27:39.944017 systemd[1]: session-5.scope: Deactivated successfully. 
Feb 13 15:27:39.945450 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:27:39.946834 systemd-logind[1461]: Removed session 5. Feb 13 15:27:40.111362 systemd[1]: Started sshd@5-188.245.200.94:22-139.178.89.65:50558.service - OpenSSH per-connection server daemon (139.178.89.65:50558). Feb 13 15:27:41.116167 sshd[1850]: Accepted publickey for core from 139.178.89.65 port 50558 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:41.119712 sshd-session[1850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:41.134766 systemd-logind[1461]: New session 6 of user core. Feb 13 15:27:41.137953 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:27:41.644954 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:27:41.645288 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:41.650802 sudo[1854]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:41.661239 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:27:41.661836 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:41.679931 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:27:41.726499 augenrules[1876]: No rules Feb 13 15:27:41.727348 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:41.727576 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:41.730154 sudo[1853]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:41.891331 sshd[1852]: Connection closed by 139.178.89.65 port 50558 Feb 13 15:27:41.891156 sshd-session[1850]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:41.896892 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:27:41.897004 systemd[1]: sshd@5-188.245.200.94:22-139.178.89.65:50558.service: Deactivated successfully. Feb 13 15:27:41.900357 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:27:41.905853 systemd-logind[1461]: Removed session 6. Feb 13 15:27:42.060338 systemd[1]: Started sshd@6-188.245.200.94:22-139.178.89.65:50572.service - OpenSSH per-connection server daemon (139.178.89.65:50572). Feb 13 15:27:43.065031 sshd[1884]: Accepted publickey for core from 139.178.89.65 port 50572 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:27:43.070238 sshd-session[1884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:43.080219 systemd-logind[1461]: New session 7 of user core. Feb 13 15:27:43.088634 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:27:43.588384 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:27:43.588691 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:43.884171 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Feb 13 15:27:43.884579 (dockerd)[1906]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:27:44.118795 dockerd[1906]: time="2025-02-13T15:27:44.118697897Z" level=info msg="Starting up" Feb 13 15:27:44.226436 dockerd[1906]: time="2025-02-13T15:27:44.226055947Z" level=info msg="Loading containers: start." Feb 13 15:27:44.401353 kernel: Initializing XFRM netlink socket Feb 13 15:27:44.489614 systemd-networkd[1386]: docker0: Link UP Feb 13 15:27:44.527693 dockerd[1906]: time="2025-02-13T15:27:44.527614033Z" level=info msg="Loading containers: done." Feb 13 15:27:44.545621 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1812123940-merged.mount: Deactivated successfully. Feb 13 15:27:44.549435 dockerd[1906]: time="2025-02-13T15:27:44.549377248Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:27:44.549566 dockerd[1906]: time="2025-02-13T15:27:44.549495444Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 15:27:44.549631 dockerd[1906]: time="2025-02-13T15:27:44.549610919Z" level=info msg="Daemon has completed initialization" Feb 13 15:27:44.588646 dockerd[1906]: time="2025-02-13T15:27:44.588585202Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:27:44.589596 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:27:45.696154 containerd[1484]: time="2025-02-13T15:27:45.695777997Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\"" Feb 13 15:27:46.385135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1010649043.mount: Deactivated successfully. 
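After "API listen on /run/docker.sock" above, the Docker daemon is reachable over its unix socket. A small standard-library sketch that checks this, assuming only the socket path from the log and the Engine API's /_ping endpoint:

```go
// Checks the state dockerd reports above ("API listen on /run/docker.sock") by
// calling the Engine API's /_ping endpoint over the unix socket. Standard library
// only; run it as a user that can read /run/docker.sock.
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Send every request to the daemon's unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}

	// The host name is a placeholder; DialContext above pins the actual destination.
	resp, err := client.Get("http://docker/_ping")
	if err != nil {
		fmt.Println("daemon not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("docker /_ping: %s %q\n", resp.Status, body) // expect "200 OK" and "OK"
}
```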
Feb 13 15:27:47.770659 containerd[1484]: time="2025-02-13T15:27:47.769505199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:47.772257 containerd[1484]: time="2025-02-13T15:27:47.772209243Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205953" Feb 13 15:27:47.773703 containerd[1484]: time="2025-02-13T15:27:47.773669402Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:47.777350 containerd[1484]: time="2025-02-13T15:27:47.777294261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:47.779793 containerd[1484]: time="2025-02-13T15:27:47.779725153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 2.083878319s" Feb 13 15:27:47.779902 containerd[1484]: time="2025-02-13T15:27:47.779792951Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\"" Feb 13 15:27:47.803771 containerd[1484]: time="2025-02-13T15:27:47.803723163Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\"" Feb 13 15:27:49.654041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 15:27:49.661757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:49.806783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:49.809676 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:49.869424 kubelet[2168]: E0213 15:27:49.869248 2168 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:49.872634 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:49.872810 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
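The containerd entries above report both the bytes read and the wall-clock time for the kube-apiserver:v1.29.14 pull (32205953 bytes in 2.083878319s). A throwaway sketch that turns those two logged figures into an approximate download rate; the same arithmetic applies to the later pulls:

```go
// Back-of-the-envelope rate for the kube-apiserver:v1.29.14 pull, using the two
// figures containerd logged above: bytes read and total pull duration.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 32205953                     // "active requests=0, bytes read=32205953"
	dur, err := time.ParseDuration("2.083878319s") // "... in 2.083878319s"
	if err != nil {
		panic(err)
	}

	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s, roughly %.1f MiB/s\n", mib, dur, mib/dur.Seconds())
}
```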
Feb 13 15:27:51.149964 containerd[1484]: time="2025-02-13T15:27:51.149891440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.152027 containerd[1484]: time="2025-02-13T15:27:51.151947047Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383111" Feb 13 15:27:51.152848 containerd[1484]: time="2025-02-13T15:27:51.152600557Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.155683 containerd[1484]: time="2025-02-13T15:27:51.155623228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:51.157090 containerd[1484]: time="2025-02-13T15:27:51.156918407Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 3.352874334s" Feb 13 15:27:51.157090 containerd[1484]: time="2025-02-13T15:27:51.156956247Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\"" Feb 13 15:27:51.179860 containerd[1484]: time="2025-02-13T15:27:51.179815481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\"" Feb 13 15:27:52.752944 containerd[1484]: time="2025-02-13T15:27:52.752870009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.754666 containerd[1484]: time="2025-02-13T15:27:52.754248071Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15767000" Feb 13 15:27:52.755610 containerd[1484]: time="2025-02-13T15:27:52.755561093Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.759106 containerd[1484]: time="2025-02-13T15:27:52.759046247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:52.760563 containerd[1484]: time="2025-02-13T15:27:52.760428989Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 1.580571109s" Feb 13 15:27:52.760563 containerd[1484]: time="2025-02-13T15:27:52.760468188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\"" Feb 13 15:27:52.781338 
containerd[1484]: time="2025-02-13T15:27:52.781292792Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:27:53.746016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1353139839.mount: Deactivated successfully. Feb 13 15:27:54.480683 containerd[1484]: time="2025-02-13T15:27:54.480616025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:54.482005 containerd[1484]: time="2025-02-13T15:27:54.481769536Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273401" Feb 13 15:27:54.482835 containerd[1484]: time="2025-02-13T15:27:54.482803968Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:54.486174 containerd[1484]: time="2025-02-13T15:27:54.486111621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:54.487043 containerd[1484]: time="2025-02-13T15:27:54.486625177Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.705088468s" Feb 13 15:27:54.487043 containerd[1484]: time="2025-02-13T15:27:54.486652097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:27:54.512746 containerd[1484]: time="2025-02-13T15:27:54.512512610Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 15:27:55.142412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount328162183.mount: Deactivated successfully. 
Feb 13 15:27:55.792281 containerd[1484]: time="2025-02-13T15:27:55.792067218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:55.796297 containerd[1484]: time="2025-02-13T15:27:55.793112133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 15:27:55.804979 containerd[1484]: time="2025-02-13T15:27:55.804684749Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:55.814591 containerd[1484]: time="2025-02-13T15:27:55.814495135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:55.816221 containerd[1484]: time="2025-02-13T15:27:55.815382130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.302828881s" Feb 13 15:27:55.816221 containerd[1484]: time="2025-02-13T15:27:55.815421050Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 15:27:55.840297 containerd[1484]: time="2025-02-13T15:27:55.840239593Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 15:27:56.388041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044570174.mount: Deactivated successfully. 
Feb 13 15:27:56.395118 containerd[1484]: time="2025-02-13T15:27:56.395059012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:56.396251 containerd[1484]: time="2025-02-13T15:27:56.396192568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Feb 13 15:27:56.397297 containerd[1484]: time="2025-02-13T15:27:56.397173445Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:56.400291 containerd[1484]: time="2025-02-13T15:27:56.399978077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:56.401016 containerd[1484]: time="2025-02-13T15:27:56.400842674Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 560.538321ms" Feb 13 15:27:56.401016 containerd[1484]: time="2025-02-13T15:27:56.400916274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 13 15:27:56.423776 containerd[1484]: time="2025-02-13T15:27:56.423521484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Feb 13 15:27:57.003438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382461614.mount: Deactivated successfully. Feb 13 15:27:59.904090 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Feb 13 15:27:59.913704 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:00.040582 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:00.048984 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:28:00.105361 kubelet[2309]: E0213 15:28:00.105310 2309 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:28:00.108195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:28:00.108359 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 13 15:28:00.708493 containerd[1484]: time="2025-02-13T15:28:00.707299061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:00.709867 containerd[1484]: time="2025-02-13T15:28:00.709779915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Feb 13 15:28:00.711500 containerd[1484]: time="2025-02-13T15:28:00.711418205Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:00.716672 containerd[1484]: time="2025-02-13T15:28:00.716620355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:00.719505 containerd[1484]: time="2025-02-13T15:28:00.719463252Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.295901648s" Feb 13 15:28:00.719660 containerd[1484]: time="2025-02-13T15:28:00.719643013Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Feb 13 15:28:06.181165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:06.191567 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:06.204739 systemd[1]: Reloading requested from client PID 2388 ('systemctl') (unit session-7.scope)... Feb 13 15:28:06.204929 systemd[1]: Reloading... Feb 13 15:28:06.344304 zram_generator::config[2425]: No configuration found. Feb 13 15:28:06.446903 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:06.513896 systemd[1]: Reloading finished in 308 ms. Feb 13 15:28:06.565255 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:28:06.565463 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:28:06.565911 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:06.571859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:06.680010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:06.690694 (kubelet)[2476]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:28:06.745157 kubelet[2476]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:06.745157 kubelet[2476]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 13 15:28:06.745157 kubelet[2476]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:06.745157 kubelet[2476]: I0213 15:28:06.745095 2476 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:28:07.775876 kubelet[2476]: I0213 15:28:07.775708 2476 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:28:07.775876 kubelet[2476]: I0213 15:28:07.775762 2476 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:28:07.776593 kubelet[2476]: I0213 15:28:07.776380 2476 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:28:07.797972 kubelet[2476]: I0213 15:28:07.797717 2476 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:28:07.798552 kubelet[2476]: E0213 15:28:07.798530 2476 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://188.245.200.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.808527 kubelet[2476]: I0213 15:28:07.808497 2476 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:28:07.810773 kubelet[2476]: I0213 15:28:07.810726 2476 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:28:07.811756 kubelet[2476]: I0213 15:28:07.811242 2476 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:28:07.811756 kubelet[2476]: I0213 15:28:07.811340 2476 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:28:07.811756 kubelet[2476]: I0213 15:28:07.811364 2476 container_manager_linux.go:301] "Creating device plugin manager" 
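Every request this kubelet (PID 2476) makes above and below fails with "dial tcp 188.245.200.94:6443: connect: connection refused": the kube-apiserver it is trying to bootstrap against is itself one of the static pods it has not started yet. A standard-library sketch, using only the address taken from those errors, that reproduces the same reachability check:

```go
// Probes the API server address the kubelet keeps failing to reach
// (188.245.200.94:6443, copied from the logged errors). Expect the same
// "connect: connection refused" until the kube-apiserver static pod started
// further down in the log is listening.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const addr = "188.245.200.94:6443" // address taken from the log

	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect to", addr, "succeeded; the endpoint is listening")
}
```

The errors clear up once the kube-apiserver container started further down begins listening on port 6443, after which the node registration at 15:28:11 succeeds.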
Feb 13 15:28:07.813658 kubelet[2476]: I0213 15:28:07.813628 2476 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:07.816595 kubelet[2476]: I0213 15:28:07.816566 2476 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:28:07.816725 kubelet[2476]: I0213 15:28:07.816713 2476 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:28:07.816800 kubelet[2476]: I0213 15:28:07.816791 2476 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:28:07.816896 kubelet[2476]: I0213 15:28:07.816883 2476 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:28:07.820150 kubelet[2476]: I0213 15:28:07.820123 2476 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:28:07.820683 kubelet[2476]: I0213 15:28:07.820652 2476 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:28:07.821508 kubelet[2476]: W0213 15:28:07.821471 2476 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:28:07.822408 kubelet[2476]: I0213 15:28:07.822378 2476 server.go:1256] "Started kubelet" Feb 13 15:28:07.822708 kubelet[2476]: W0213 15:28:07.822518 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.200.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-73ff0440f7&limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.822708 kubelet[2476]: E0213 15:28:07.822572 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.200.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-73ff0440f7&limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.827717 kubelet[2476]: W0213 15:28:07.827555 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.200.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.827717 kubelet[2476]: E0213 15:28:07.827617 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.200.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.829476 kubelet[2476]: I0213 15:28:07.829300 2476 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:28:07.830502 kubelet[2476]: E0213 15:28:07.830256 2476 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.200.94:6443/api/v1/namespaces/default/events\": dial tcp 188.245.200.94:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-1-1-73ff0440f7.1823ce18fd7aab0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-1-73ff0440f7,UID:ci-4152-2-1-1-73ff0440f7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-1-73ff0440f7,},FirstTimestamp:2025-02-13 15:28:07.822355213 +0000 UTC m=+1.126657462,LastTimestamp:2025-02-13 15:28:07.822355213 +0000 UTC 
m=+1.126657462,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-1-73ff0440f7,}" Feb 13 15:28:07.833166 kubelet[2476]: I0213 15:28:07.833129 2476 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:28:07.834044 kubelet[2476]: I0213 15:28:07.834008 2476 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:28:07.837148 kubelet[2476]: I0213 15:28:07.835650 2476 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:28:07.837148 kubelet[2476]: I0213 15:28:07.836969 2476 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:28:07.837148 kubelet[2476]: I0213 15:28:07.837026 2476 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:28:07.837148 kubelet[2476]: I0213 15:28:07.837108 2476 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:28:07.837343 kubelet[2476]: I0213 15:28:07.837238 2476 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:28:07.843488 kubelet[2476]: W0213 15:28:07.843433 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://188.245.200.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.843660 kubelet[2476]: E0213 15:28:07.843647 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://188.245.200.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.843802 kubelet[2476]: E0213 15:28:07.843789 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.200.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-73ff0440f7?timeout=10s\": dial tcp 188.245.200.94:6443: connect: connection refused" interval="200ms" Feb 13 15:28:07.844078 kubelet[2476]: I0213 15:28:07.844059 2476 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:28:07.844248 kubelet[2476]: I0213 15:28:07.844229 2476 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:28:07.847085 kubelet[2476]: I0213 15:28:07.847039 2476 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:28:07.854320 kubelet[2476]: I0213 15:28:07.853400 2476 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:28:07.854444 kubelet[2476]: I0213 15:28:07.854426 2476 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:28:07.854469 kubelet[2476]: I0213 15:28:07.854444 2476 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:28:07.854469 kubelet[2476]: I0213 15:28:07.854464 2476 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:28:07.854532 kubelet[2476]: E0213 15:28:07.854509 2476 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:28:07.861434 kubelet[2476]: E0213 15:28:07.861393 2476 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:28:07.861685 kubelet[2476]: W0213 15:28:07.861619 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.200.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.861685 kubelet[2476]: E0213 15:28:07.861673 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.200.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:07.882050 kubelet[2476]: I0213 15:28:07.882024 2476 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:28:07.882564 kubelet[2476]: I0213 15:28:07.882214 2476 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:28:07.882564 kubelet[2476]: I0213 15:28:07.882238 2476 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:07.884262 kubelet[2476]: I0213 15:28:07.884118 2476 policy_none.go:49] "None policy: Start" Feb 13 15:28:07.884946 kubelet[2476]: I0213 15:28:07.884927 2476 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:28:07.884997 kubelet[2476]: I0213 15:28:07.884977 2476 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:28:07.893149 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:28:07.902658 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:28:07.906624 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 15:28:07.920520 kubelet[2476]: I0213 15:28:07.919989 2476 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:28:07.921060 kubelet[2476]: I0213 15:28:07.920764 2476 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:28:07.924587 kubelet[2476]: E0213 15:28:07.924465 2476 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-1-1-73ff0440f7\" not found" Feb 13 15:28:07.938661 kubelet[2476]: I0213 15:28:07.938619 2476 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:07.939411 kubelet[2476]: E0213 15:28:07.939385 2476 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.200.94:6443/api/v1/nodes\": dial tcp 188.245.200.94:6443: connect: connection refused" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:07.954809 kubelet[2476]: I0213 15:28:07.954716 2476 topology_manager.go:215] "Topology Admit Handler" podUID="d8f1926d9a7b1f00897cc97fdec61a27" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:07.957700 kubelet[2476]: I0213 15:28:07.957416 2476 topology_manager.go:215] "Topology Admit Handler" podUID="38c751677bda5d6519638a10347b60db" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:07.959760 kubelet[2476]: I0213 15:28:07.959712 2476 topology_manager.go:215] "Topology Admit Handler" podUID="30e456874c12299e2604c8eedf7ba566" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:07.969479 systemd[1]: Created slice kubepods-burstable-podd8f1926d9a7b1f00897cc97fdec61a27.slice - libcontainer container kubepods-burstable-podd8f1926d9a7b1f00897cc97fdec61a27.slice. Feb 13 15:28:07.994889 systemd[1]: Created slice kubepods-burstable-pod38c751677bda5d6519638a10347b60db.slice - libcontainer container kubepods-burstable-pod38c751677bda5d6519638a10347b60db.slice. Feb 13 15:28:08.011565 systemd[1]: Created slice kubepods-burstable-pod30e456874c12299e2604c8eedf7ba566.slice - libcontainer container kubepods-burstable-pod30e456874c12299e2604c8eedf7ba566.slice. 
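The three pods admitted above (kube-apiserver, kube-controller-manager and kube-scheduler for this node) are static pods: the kubelet logged "Adding static pod path" path="/etc/kubernetes/manifests" earlier, and each admitted pod gets its own kubepods-burstable slice here. A minimal sketch that lists that directory, whose path is taken from the log; the manifest file names themselves do not appear in the log:

```go
// Lists the static pod path the kubelet logged earlier
// ("Adding static pod path" path="/etc/kubernetes/manifests"). The three pods
// admitted above are expected to come from manifest files in this directory;
// the file names themselves do not appear in the log.
package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/etc/kubernetes/manifests" // path taken from the kubelet log line

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read static pod path:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		info, err := e.Info()
		if err != nil {
			continue
		}
		fmt.Printf("%s\t%d bytes\n", e.Name(), info.Size())
	}
}
```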
Feb 13 15:28:08.041330 kubelet[2476]: I0213 15:28:08.037701 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.045195 kubelet[2476]: E0213 15:28:08.045162 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.200.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-73ff0440f7?timeout=10s\": dial tcp 188.245.200.94:6443: connect: connection refused" interval="400ms" Feb 13 15:28:08.138692 kubelet[2476]: I0213 15:28:08.138616 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.138971 kubelet[2476]: I0213 15:28:08.138716 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.138971 kubelet[2476]: I0213 15:28:08.138869 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30e456874c12299e2604c8eedf7ba566-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-1-73ff0440f7\" (UID: \"30e456874c12299e2604c8eedf7ba566\") " pod="kube-system/kube-scheduler-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.139175 kubelet[2476]: I0213 15:28:08.139002 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.139225 kubelet[2476]: I0213 15:28:08.139206 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.139282 kubelet[2476]: I0213 15:28:08.139255 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.139349 kubelet[2476]: I0213 15:28:08.139329 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.139406 kubelet[2476]: I0213 15:28:08.139385 2476 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.141381 kubelet[2476]: I0213 15:28:08.141347 2476 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.141940 kubelet[2476]: E0213 15:28:08.141916 2476 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.200.94:6443/api/v1/nodes\": dial tcp 188.245.200.94:6443: connect: connection refused" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.293784 containerd[1484]: time="2025-02-13T15:28:08.293253107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-1-73ff0440f7,Uid:d8f1926d9a7b1f00897cc97fdec61a27,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:08.308418 containerd[1484]: time="2025-02-13T15:28:08.308333300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-1-73ff0440f7,Uid:38c751677bda5d6519638a10347b60db,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:08.315562 containerd[1484]: time="2025-02-13T15:28:08.315228722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-1-73ff0440f7,Uid:30e456874c12299e2604c8eedf7ba566,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:08.446086 kubelet[2476]: E0213 15:28:08.446042 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.200.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-73ff0440f7?timeout=10s\": dial tcp 188.245.200.94:6443: connect: connection refused" interval="800ms" Feb 13 15:28:08.546188 kubelet[2476]: I0213 15:28:08.545436 2476 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.546188 kubelet[2476]: E0213 15:28:08.545977 2476 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.200.94:6443/api/v1/nodes\": dial tcp 188.245.200.94:6443: connect: connection refused" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:08.826530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144772511.mount: Deactivated successfully. 
Feb 13 15:28:08.835631 containerd[1484]: time="2025-02-13T15:28:08.835519414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:08.837228 containerd[1484]: time="2025-02-13T15:28:08.837172329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 15:28:08.839026 containerd[1484]: time="2025-02-13T15:28:08.838952565Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:08.841171 containerd[1484]: time="2025-02-13T15:28:08.841116130Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:08.843656 containerd[1484]: time="2025-02-13T15:28:08.843455699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:28:08.846327 containerd[1484]: time="2025-02-13T15:28:08.846249956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:08.847405 containerd[1484]: time="2025-02-13T15:28:08.847097454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:28:08.847405 containerd[1484]: time="2025-02-13T15:28:08.847326179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:28:08.849132 containerd[1484]: time="2025-02-13T15:28:08.849086455Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.703305ms" Feb 13 15:28:08.856574 containerd[1484]: time="2025-02-13T15:28:08.855048099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.626077ms" Feb 13 15:28:08.858318 containerd[1484]: time="2025-02-13T15:28:08.858212044Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.87824ms" Feb 13 15:28:08.931714 kubelet[2476]: W0213 15:28:08.931640 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://188.245.200.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-73ff0440f7&limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection 
refused Feb 13 15:28:08.932163 kubelet[2476]: E0213 15:28:08.932147 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://188.245.200.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-1-73ff0440f7&limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:08.980700 containerd[1484]: time="2025-02-13T15:28:08.974113244Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:08.980700 containerd[1484]: time="2025-02-13T15:28:08.974316488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:08.980700 containerd[1484]: time="2025-02-13T15:28:08.974350409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:08.980700 containerd[1484]: time="2025-02-13T15:28:08.976776219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:08.981091 containerd[1484]: time="2025-02-13T15:28:08.979733960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:08.981091 containerd[1484]: time="2025-02-13T15:28:08.979803002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:08.981091 containerd[1484]: time="2025-02-13T15:28:08.979819202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:08.981373 containerd[1484]: time="2025-02-13T15:28:08.978351491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:08.981373 containerd[1484]: time="2025-02-13T15:28:08.978426133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:08.981373 containerd[1484]: time="2025-02-13T15:28:08.978441653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:08.981373 containerd[1484]: time="2025-02-13T15:28:08.978524175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:08.985693 containerd[1484]: time="2025-02-13T15:28:08.983064309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:09.009655 systemd[1]: Started cri-containerd-4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c.scope - libcontainer container 4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c. Feb 13 15:28:09.013353 systemd[1]: Started cri-containerd-aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98.scope - libcontainer container aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98. Feb 13 15:28:09.019504 systemd[1]: Started cri-containerd-44d3ee01ab6a1969391fca604d8be5d93c2f062b4aa8f086b87bfabe8c61d7d2.scope - libcontainer container 44d3ee01ab6a1969391fca604d8be5d93c2f062b4aa8f086b87bfabe8c61d7d2. 
Feb 13 15:28:09.073374 containerd[1484]: time="2025-02-13T15:28:09.072860923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-1-73ff0440f7,Uid:30e456874c12299e2604c8eedf7ba566,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c\"" Feb 13 15:28:09.083124 containerd[1484]: time="2025-02-13T15:28:09.082343854Z" level=info msg="CreateContainer within sandbox \"4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:28:09.087133 containerd[1484]: time="2025-02-13T15:28:09.086994518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-1-73ff0440f7,Uid:38c751677bda5d6519638a10347b60db,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98\"" Feb 13 15:28:09.094852 containerd[1484]: time="2025-02-13T15:28:09.094476725Z" level=info msg="CreateContainer within sandbox \"aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:28:09.100550 containerd[1484]: time="2025-02-13T15:28:09.100512460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-1-73ff0440f7,Uid:d8f1926d9a7b1f00897cc97fdec61a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d3ee01ab6a1969391fca604d8be5d93c2f062b4aa8f086b87bfabe8c61d7d2\"" Feb 13 15:28:09.105229 containerd[1484]: time="2025-02-13T15:28:09.105104682Z" level=info msg="CreateContainer within sandbox \"44d3ee01ab6a1969391fca604d8be5d93c2f062b4aa8f086b87bfabe8c61d7d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:28:09.115587 kubelet[2476]: W0213 15:28:09.115526 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://188.245.200.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:09.115587 kubelet[2476]: E0213 15:28:09.115592 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://188.245.200.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:09.118025 containerd[1484]: time="2025-02-13T15:28:09.117976129Z" level=info msg="CreateContainer within sandbox \"aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774\"" Feb 13 15:28:09.119287 containerd[1484]: time="2025-02-13T15:28:09.119231597Z" level=info msg="CreateContainer within sandbox \"4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294\"" Feb 13 15:28:09.120319 containerd[1484]: time="2025-02-13T15:28:09.119563245Z" level=info msg="StartContainer for \"3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774\"" Feb 13 15:28:09.120319 containerd[1484]: time="2025-02-13T15:28:09.119611846Z" level=info msg="StartContainer for \"171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294\"" 
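The "RunPodSandbox ... returns sandbox id" entries above tie each control-plane pod to one of the cri-containerd-<id>.scope units started just before. A throwaway sketch that extracts that mapping from journal text on stdin; the regular expression mirrors the logged message format and nothing else:

```go
// Extracts the pod-name-to-sandbox-id mapping from journal text on stdin, based on
// the "RunPodSandbox ... returns sandbox id" message format shown above. The optional
// backslash before the quotes tolerates both raw journalctl output and this escaped
// rendering of it.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*returns sandbox id \\?"([0-9a-f]+)\\?"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some journal lines are very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-55s %s\n", m[1], m[2])
		}
	}
}
```

Fed this section of the journal, it would print the three sandboxes created above for the kube-scheduler, kube-controller-manager and kube-apiserver pods of this node.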
Feb 13 15:28:09.132910 containerd[1484]: time="2025-02-13T15:28:09.132809620Z" level=info msg="CreateContainer within sandbox \"44d3ee01ab6a1969391fca604d8be5d93c2f062b4aa8f086b87bfabe8c61d7d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"590060629b04393ad42f36553d5cedef544df3ce5c107f4c7a857278269aa2e8\"" Feb 13 15:28:09.133871 containerd[1484]: time="2025-02-13T15:28:09.133717440Z" level=info msg="StartContainer for \"590060629b04393ad42f36553d5cedef544df3ce5c107f4c7a857278269aa2e8\"" Feb 13 15:28:09.152201 systemd[1]: Started cri-containerd-3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774.scope - libcontainer container 3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774. Feb 13 15:28:09.161681 systemd[1]: Started cri-containerd-171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294.scope - libcontainer container 171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294. Feb 13 15:28:09.192528 systemd[1]: Started cri-containerd-590060629b04393ad42f36553d5cedef544df3ce5c107f4c7a857278269aa2e8.scope - libcontainer container 590060629b04393ad42f36553d5cedef544df3ce5c107f4c7a857278269aa2e8. Feb 13 15:28:09.229875 containerd[1484]: time="2025-02-13T15:28:09.229816064Z" level=info msg="StartContainer for \"3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774\" returns successfully" Feb 13 15:28:09.247458 kubelet[2476]: E0213 15:28:09.247148 2476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.200.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-1-73ff0440f7?timeout=10s\": dial tcp 188.245.200.94:6443: connect: connection refused" interval="1.6s" Feb 13 15:28:09.255973 containerd[1484]: time="2025-02-13T15:28:09.255923846Z" level=info msg="StartContainer for \"171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294\" returns successfully" Feb 13 15:28:09.268887 containerd[1484]: time="2025-02-13T15:28:09.268822094Z" level=info msg="StartContainer for \"590060629b04393ad42f36553d5cedef544df3ce5c107f4c7a857278269aa2e8\" returns successfully" Feb 13 15:28:09.347279 kubelet[2476]: W0213 15:28:09.347134 2476 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://188.245.200.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:09.348679 kubelet[2476]: E0213 15:28:09.348463 2476 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://188.245.200.94:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 188.245.200.94:6443: connect: connection refused Feb 13 15:28:09.352034 kubelet[2476]: I0213 15:28:09.351882 2476 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:09.352493 kubelet[2476]: E0213 15:28:09.352477 2476 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://188.245.200.94:6443/api/v1/nodes\": dial tcp 188.245.200.94:6443: connect: connection refused" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:10.955521 kubelet[2476]: I0213 15:28:10.955279 2476 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:11.201303 kubelet[2476]: I0213 15:28:11.200019 2476 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:11.829074 kubelet[2476]: 
I0213 15:28:11.828725 2476 apiserver.go:52] "Watching apiserver" Feb 13 15:28:11.837726 kubelet[2476]: I0213 15:28:11.837646 2476 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:28:13.852770 systemd[1]: Reloading requested from client PID 2744 ('systemctl') (unit session-7.scope)... Feb 13 15:28:13.852798 systemd[1]: Reloading... Feb 13 15:28:13.969739 zram_generator::config[2785]: No configuration found. Feb 13 15:28:14.070339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:28:14.154370 systemd[1]: Reloading finished in 300 ms. Feb 13 15:28:14.194682 kubelet[2476]: I0213 15:28:14.194597 2476 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:28:14.194874 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:14.208750 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 15:28:14.208981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:14.209035 systemd[1]: kubelet.service: Consumed 1.544s CPU time, 111.6M memory peak, 0B memory swap peak. Feb 13 15:28:14.216714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:28:14.339466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:28:14.339593 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:28:14.397432 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:14.397806 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:28:14.397853 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:28:14.398064 kubelet[2829]: I0213 15:28:14.398016 2829 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:28:14.404119 kubelet[2829]: I0213 15:28:14.404076 2829 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:28:14.406394 kubelet[2829]: I0213 15:28:14.406299 2829 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:28:14.406892 kubelet[2829]: I0213 15:28:14.406862 2829 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:28:14.409101 kubelet[2829]: I0213 15:28:14.409057 2829 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:28:14.415064 kubelet[2829]: I0213 15:28:14.414992 2829 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:28:14.426005 kubelet[2829]: I0213 15:28:14.425542 2829 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:28:14.426005 kubelet[2829]: I0213 15:28:14.425737 2829 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:28:14.426279 kubelet[2829]: I0213 15:28:14.426235 2829 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:28:14.426412 kubelet[2829]: I0213 15:28:14.426400 2829 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:28:14.426654 kubelet[2829]: I0213 15:28:14.426458 2829 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:28:14.426654 kubelet[2829]: I0213 15:28:14.426495 2829 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:14.426654 kubelet[2829]: I0213 15:28:14.426618 2829 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:28:14.426777 kubelet[2829]: I0213 15:28:14.426766 2829 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:28:14.426859 kubelet[2829]: I0213 15:28:14.426851 2829 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:28:14.426958 kubelet[2829]: I0213 15:28:14.426941 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:28:14.428017 kubelet[2829]: I0213 15:28:14.427983 2829 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:28:14.428191 kubelet[2829]: I0213 15:28:14.428151 2829 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:28:14.433412 kubelet[2829]: I0213 15:28:14.432601 2829 server.go:1256] "Started kubelet" Feb 13 15:28:14.437709 kubelet[2829]: I0213 15:28:14.437370 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:28:14.446363 kubelet[2829]: I0213 15:28:14.446334 2829 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:28:14.448750 kubelet[2829]: I0213 15:28:14.448718 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:28:14.449825 kubelet[2829]: I0213 15:28:14.449806 2829 server.go:233] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:28:14.467103 kubelet[2829]: I0213 15:28:14.453185 2829 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:28:14.477509 kubelet[2829]: I0213 15:28:14.453352 2829 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:28:14.477509 kubelet[2829]: I0213 15:28:14.477261 2829 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:28:14.477509 kubelet[2829]: I0213 15:28:14.477352 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:28:14.483359 kubelet[2829]: I0213 15:28:14.483320 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:28:14.483500 kubelet[2829]: I0213 15:28:14.483381 2829 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:28:14.483500 kubelet[2829]: I0213 15:28:14.483405 2829 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:28:14.483500 kubelet[2829]: E0213 15:28:14.483461 2829 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:28:14.484679 kubelet[2829]: I0213 15:28:14.454746 2829 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:28:14.494913 kubelet[2829]: I0213 15:28:14.493854 2829 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:28:14.494913 kubelet[2829]: I0213 15:28:14.493882 2829 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:28:14.494913 kubelet[2829]: I0213 15:28:14.493995 2829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:28:14.559005 kubelet[2829]: I0213 15:28:14.558973 2829 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.568635 kubelet[2829]: I0213 15:28:14.568606 2829 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:28:14.568635 kubelet[2829]: I0213 15:28:14.568628 2829 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:28:14.568635 kubelet[2829]: I0213 15:28:14.568648 2829 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:28:14.568815 kubelet[2829]: I0213 15:28:14.568797 2829 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:28:14.568840 kubelet[2829]: I0213 15:28:14.568817 2829 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:28:14.568840 kubelet[2829]: I0213 15:28:14.568824 2829 policy_none.go:49] "None policy: Start" Feb 13 15:28:14.571499 kubelet[2829]: I0213 15:28:14.571460 2829 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:28:14.571499 kubelet[2829]: I0213 15:28:14.571491 2829 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:28:14.571739 kubelet[2829]: I0213 15:28:14.571662 2829 state_mem.go:75] "Updated machine memory state" Feb 13 15:28:14.573093 kubelet[2829]: I0213 15:28:14.572994 2829 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.573093 kubelet[2829]: I0213 15:28:14.573071 2829 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.581852 kubelet[2829]: I0213 15:28:14.580995 2829 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:28:14.581852 kubelet[2829]: I0213 15:28:14.581229 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:28:14.585066 kubelet[2829]: I0213 15:28:14.585030 2829 topology_manager.go:215] "Topology Admit Handler" podUID="38c751677bda5d6519638a10347b60db" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.585183 kubelet[2829]: I0213 15:28:14.585114 2829 topology_manager.go:215] "Topology Admit Handler" podUID="30e456874c12299e2604c8eedf7ba566" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.585183 kubelet[2829]: I0213 15:28:14.585166 2829 topology_manager.go:215] "Topology Admit Handler" podUID="d8f1926d9a7b1f00897cc97fdec61a27" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.605287 kubelet[2829]: E0213 15:28:14.604119 2829 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678258 kubelet[2829]: I0213 15:28:14.677806 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678258 kubelet[2829]: I0213 15:28:14.677890 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678258 kubelet[2829]: I0213 15:28:14.677969 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678258 kubelet[2829]: I0213 15:28:14.678014 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678258 kubelet[2829]: I0213 15:28:14.678054 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678681 kubelet[2829]: I0213 15:28:14.678094 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678681 kubelet[2829]: I0213 15:28:14.678136 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/38c751677bda5d6519638a10347b60db-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-1-73ff0440f7\" (UID: \"38c751677bda5d6519638a10347b60db\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678681 kubelet[2829]: I0213 15:28:14.678174 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30e456874c12299e2604c8eedf7ba566-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-1-73ff0440f7\" (UID: \"30e456874c12299e2604c8eedf7ba566\") " pod="kube-system/kube-scheduler-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:14.678681 kubelet[2829]: I0213 15:28:14.678214 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d8f1926d9a7b1f00897cc97fdec61a27-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" (UID: \"d8f1926d9a7b1f00897cc97fdec61a27\") " pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:15.428659 kubelet[2829]: I0213 15:28:15.428606 2829 apiserver.go:52] "Watching apiserver" Feb 13 15:28:15.477428 kubelet[2829]: I0213 15:28:15.477373 2829 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:28:15.562138 kubelet[2829]: E0213 15:28:15.562097 2829 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-1-73ff0440f7\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:15.644770 kubelet[2829]: I0213 15:28:15.644711 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-1-1-73ff0440f7" podStartSLOduration=2.644663454 podStartE2EDuration="2.644663454s" podCreationTimestamp="2025-02-13 15:28:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:15.598509467 +0000 UTC m=+1.252132898" watchObservedRunningTime="2025-02-13 15:28:15.644663454 +0000 UTC m=+1.298286885" Feb 13 15:28:15.687554 kubelet[2829]: I0213 15:28:15.687314 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-1-1-73ff0440f7" podStartSLOduration=1.687259932 podStartE2EDuration="1.687259932s" podCreationTimestamp="2025-02-13 15:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:15.65549947 +0000 UTC m=+1.309122901" watchObservedRunningTime="2025-02-13 15:28:15.687259932 +0000 UTC m=+1.340883323" Feb 13 15:28:19.381922 sudo[1887]: pam_unix(sudo:session): session closed for user root Feb 13 15:28:19.541406 sshd[1886]: Connection closed by 139.178.89.65 port 50572 Feb 13 15:28:19.542091 sshd-session[1884]: pam_unix(sshd:session): session closed for user core Feb 13 15:28:19.546348 systemd[1]: sshd@6-188.245.200.94:22-139.178.89.65:50572.service: Deactivated 
successfully. Feb 13 15:28:19.548716 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:28:19.548918 systemd[1]: session-7.scope: Consumed 6.987s CPU time, 186.8M memory peak, 0B memory swap peak. Feb 13 15:28:19.551083 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:28:19.552969 systemd-logind[1461]: Removed session 7. Feb 13 15:28:26.040997 kubelet[2829]: I0213 15:28:26.040389 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-1-1-73ff0440f7" podStartSLOduration=12.040322454 podStartE2EDuration="12.040322454s" podCreationTimestamp="2025-02-13 15:28:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:15.695011612 +0000 UTC m=+1.348635043" watchObservedRunningTime="2025-02-13 15:28:26.040322454 +0000 UTC m=+11.693946045" Feb 13 15:28:28.753215 kubelet[2829]: I0213 15:28:28.753004 2829 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:28:28.753722 containerd[1484]: time="2025-02-13T15:28:28.753548507Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:28:28.755335 kubelet[2829]: I0213 15:28:28.755019 2829 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:28:29.645891 kubelet[2829]: I0213 15:28:29.645124 2829 topology_manager.go:215] "Topology Admit Handler" podUID="e9c41e0d-9a57-423f-b0a3-6e93124d4853" podNamespace="kube-system" podName="kube-proxy-zgrwm" Feb 13 15:28:29.652646 kubelet[2829]: W0213 15:28:29.652465 2829 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-1-1-73ff0440f7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-1-1-73ff0440f7' and this object Feb 13 15:28:29.653158 kubelet[2829]: E0213 15:28:29.653142 2829 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-4152-2-1-1-73ff0440f7" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-1-1-73ff0440f7' and this object Feb 13 15:28:29.659966 systemd[1]: Created slice kubepods-besteffort-pode9c41e0d_9a57_423f_b0a3_6e93124d4853.slice - libcontainer container kubepods-besteffort-pode9c41e0d_9a57_423f_b0a3_6e93124d4853.slice. 
Feb 13 15:28:29.681076 kubelet[2829]: I0213 15:28:29.681022 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9f6d\" (UniqueName: \"kubernetes.io/projected/e9c41e0d-9a57-423f-b0a3-6e93124d4853-kube-api-access-v9f6d\") pod \"kube-proxy-zgrwm\" (UID: \"e9c41e0d-9a57-423f-b0a3-6e93124d4853\") " pod="kube-system/kube-proxy-zgrwm" Feb 13 15:28:29.681076 kubelet[2829]: I0213 15:28:29.681087 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9c41e0d-9a57-423f-b0a3-6e93124d4853-xtables-lock\") pod \"kube-proxy-zgrwm\" (UID: \"e9c41e0d-9a57-423f-b0a3-6e93124d4853\") " pod="kube-system/kube-proxy-zgrwm" Feb 13 15:28:29.681506 kubelet[2829]: I0213 15:28:29.681122 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9c41e0d-9a57-423f-b0a3-6e93124d4853-lib-modules\") pod \"kube-proxy-zgrwm\" (UID: \"e9c41e0d-9a57-423f-b0a3-6e93124d4853\") " pod="kube-system/kube-proxy-zgrwm" Feb 13 15:28:29.681506 kubelet[2829]: I0213 15:28:29.681158 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9c41e0d-9a57-423f-b0a3-6e93124d4853-kube-proxy\") pod \"kube-proxy-zgrwm\" (UID: \"e9c41e0d-9a57-423f-b0a3-6e93124d4853\") " pod="kube-system/kube-proxy-zgrwm" Feb 13 15:28:29.813298 kubelet[2829]: I0213 15:28:29.810350 2829 topology_manager.go:215] "Topology Admit Handler" podUID="1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-4dlcr" Feb 13 15:28:29.826634 systemd[1]: Created slice kubepods-besteffort-pod1c4dfc8c_6fae_40a6_8bc8_73f1ef2b24f3.slice - libcontainer container kubepods-besteffort-pod1c4dfc8c_6fae_40a6_8bc8_73f1ef2b24f3.slice. Feb 13 15:28:29.882622 kubelet[2829]: I0213 15:28:29.882486 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h77wc\" (UniqueName: \"kubernetes.io/projected/1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3-kube-api-access-h77wc\") pod \"tigera-operator-c7ccbd65-4dlcr\" (UID: \"1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3\") " pod="tigera-operator/tigera-operator-c7ccbd65-4dlcr" Feb 13 15:28:29.882960 kubelet[2829]: I0213 15:28:29.882867 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3-var-lib-calico\") pod \"tigera-operator-c7ccbd65-4dlcr\" (UID: \"1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3\") " pod="tigera-operator/tigera-operator-c7ccbd65-4dlcr" Feb 13 15:28:30.134071 containerd[1484]: time="2025-02-13T15:28:30.133993689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4dlcr,Uid:1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3,Namespace:tigera-operator,Attempt:0,}" Feb 13 15:28:30.165684 containerd[1484]: time="2025-02-13T15:28:30.165560157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:30.165903 containerd[1484]: time="2025-02-13T15:28:30.165774927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:30.165903 containerd[1484]: time="2025-02-13T15:28:30.165806809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:30.166122 containerd[1484]: time="2025-02-13T15:28:30.165897813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:30.186494 systemd[1]: Started cri-containerd-f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d.scope - libcontainer container f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d. Feb 13 15:28:30.223973 containerd[1484]: time="2025-02-13T15:28:30.223713501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4dlcr,Uid:1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d\"" Feb 13 15:28:30.228655 containerd[1484]: time="2025-02-13T15:28:30.228383278Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 15:28:30.782355 kubelet[2829]: E0213 15:28:30.782204 2829 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:28:30.782559 kubelet[2829]: E0213 15:28:30.782431 2829 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e9c41e0d-9a57-423f-b0a3-6e93124d4853-kube-proxy podName:e9c41e0d-9a57-423f-b0a3-6e93124d4853 nodeName:}" failed. No retries permitted until 2025-02-13 15:28:31.282393275 +0000 UTC m=+16.936016746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e9c41e0d-9a57-423f-b0a3-6e93124d4853-kube-proxy") pod "kube-proxy-zgrwm" (UID: "e9c41e0d-9a57-423f-b0a3-6e93124d4853") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:28:31.471026 containerd[1484]: time="2025-02-13T15:28:31.470558643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgrwm,Uid:e9c41e0d-9a57-423f-b0a3-6e93124d4853,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:31.499899 containerd[1484]: time="2025-02-13T15:28:31.499472650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:31.500696 containerd[1484]: time="2025-02-13T15:28:31.500477618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:31.500696 containerd[1484]: time="2025-02-13T15:28:31.500512820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:31.500696 containerd[1484]: time="2025-02-13T15:28:31.500604424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:31.529576 systemd[1]: Started cri-containerd-aad1e33a7cbad98228350e103f4d4cf09aad5950f1d3c7868c8a35c33896b678.scope - libcontainer container aad1e33a7cbad98228350e103f4d4cf09aad5950f1d3c7868c8a35c33896b678. 
Feb 13 15:28:31.557409 containerd[1484]: time="2025-02-13T15:28:31.557158978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zgrwm,Uid:e9c41e0d-9a57-423f-b0a3-6e93124d4853,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad1e33a7cbad98228350e103f4d4cf09aad5950f1d3c7868c8a35c33896b678\"" Feb 13 15:28:31.562113 containerd[1484]: time="2025-02-13T15:28:31.562001127Z" level=info msg="CreateContainer within sandbox \"aad1e33a7cbad98228350e103f4d4cf09aad5950f1d3c7868c8a35c33896b678\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:28:31.592681 containerd[1484]: time="2025-02-13T15:28:31.592592854Z" level=info msg="CreateContainer within sandbox \"aad1e33a7cbad98228350e103f4d4cf09aad5950f1d3c7868c8a35c33896b678\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"24674852f82b3ffbac6e1e45167183e3944e9f593204a57933957bc85f21352b\"" Feb 13 15:28:31.595526 containerd[1484]: time="2025-02-13T15:28:31.595471230Z" level=info msg="StartContainer for \"24674852f82b3ffbac6e1e45167183e3944e9f593204a57933957bc85f21352b\"" Feb 13 15:28:31.638546 systemd[1]: Started cri-containerd-24674852f82b3ffbac6e1e45167183e3944e9f593204a57933957bc85f21352b.scope - libcontainer container 24674852f82b3ffbac6e1e45167183e3944e9f593204a57933957bc85f21352b. Feb 13 15:28:31.684421 containerd[1484]: time="2025-02-13T15:28:31.684108222Z" level=info msg="StartContainer for \"24674852f82b3ffbac6e1e45167183e3944e9f593204a57933957bc85f21352b\" returns successfully" Feb 13 15:28:32.250629 containerd[1484]: time="2025-02-13T15:28:32.250493118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:32.251545 containerd[1484]: time="2025-02-13T15:28:32.251380081Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Feb 13 15:28:32.252429 containerd[1484]: time="2025-02-13T15:28:32.252365728Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:32.255306 containerd[1484]: time="2025-02-13T15:28:32.254966933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:32.256124 containerd[1484]: time="2025-02-13T15:28:32.256087427Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.027657948s" Feb 13 15:28:32.256230 containerd[1484]: time="2025-02-13T15:28:32.256214113Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Feb 13 15:28:32.260610 containerd[1484]: time="2025-02-13T15:28:32.260470158Z" level=info msg="CreateContainer within sandbox \"f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 15:28:32.292776 containerd[1484]: time="2025-02-13T15:28:32.292683346Z" level=info msg="CreateContainer within sandbox 
\"f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8\"" Feb 13 15:28:32.294558 containerd[1484]: time="2025-02-13T15:28:32.293490985Z" level=info msg="StartContainer for \"5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8\"" Feb 13 15:28:32.326556 systemd[1]: Started cri-containerd-5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8.scope - libcontainer container 5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8. Feb 13 15:28:32.361058 containerd[1484]: time="2025-02-13T15:28:32.361007150Z" level=info msg="StartContainer for \"5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8\" returns successfully" Feb 13 15:28:32.625356 kubelet[2829]: I0213 15:28:32.622973 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-4dlcr" podStartSLOduration=1.593722277 podStartE2EDuration="3.622925578s" podCreationTimestamp="2025-02-13 15:28:29 +0000 UTC" firstStartedPulling="2025-02-13 15:28:30.227640443 +0000 UTC m=+15.881263874" lastFinishedPulling="2025-02-13 15:28:32.256843784 +0000 UTC m=+17.910467175" observedRunningTime="2025-02-13 15:28:32.60903059 +0000 UTC m=+18.262654021" watchObservedRunningTime="2025-02-13 15:28:32.622925578 +0000 UTC m=+18.276549009" Feb 13 15:28:37.308532 kubelet[2829]: I0213 15:28:37.308478 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zgrwm" podStartSLOduration=8.308434102 podStartE2EDuration="8.308434102s" podCreationTimestamp="2025-02-13 15:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:32.62484391 +0000 UTC m=+18.278467381" watchObservedRunningTime="2025-02-13 15:28:37.308434102 +0000 UTC m=+22.962057533" Feb 13 15:28:37.308946 kubelet[2829]: I0213 15:28:37.308713 2829 topology_manager.go:215] "Topology Admit Handler" podUID="b04ff83c-874c-49d7-b8d7-8eb82be18767" podNamespace="calico-system" podName="calico-typha-7cc7468c55-j2gm9" Feb 13 15:28:37.318591 systemd[1]: Created slice kubepods-besteffort-podb04ff83c_874c_49d7_b8d7_8eb82be18767.slice - libcontainer container kubepods-besteffort-podb04ff83c_874c_49d7_b8d7_8eb82be18767.slice. 
Feb 13 15:28:37.335464 kubelet[2829]: I0213 15:28:37.333204 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b04ff83c-874c-49d7-b8d7-8eb82be18767-typha-certs\") pod \"calico-typha-7cc7468c55-j2gm9\" (UID: \"b04ff83c-874c-49d7-b8d7-8eb82be18767\") " pod="calico-system/calico-typha-7cc7468c55-j2gm9" Feb 13 15:28:37.335464 kubelet[2829]: I0213 15:28:37.333262 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ff83c-874c-49d7-b8d7-8eb82be18767-tigera-ca-bundle\") pod \"calico-typha-7cc7468c55-j2gm9\" (UID: \"b04ff83c-874c-49d7-b8d7-8eb82be18767\") " pod="calico-system/calico-typha-7cc7468c55-j2gm9" Feb 13 15:28:37.335464 kubelet[2829]: I0213 15:28:37.333301 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxb99\" (UniqueName: \"kubernetes.io/projected/b04ff83c-874c-49d7-b8d7-8eb82be18767-kube-api-access-gxb99\") pod \"calico-typha-7cc7468c55-j2gm9\" (UID: \"b04ff83c-874c-49d7-b8d7-8eb82be18767\") " pod="calico-system/calico-typha-7cc7468c55-j2gm9" Feb 13 15:28:37.483886 kubelet[2829]: I0213 15:28:37.483841 2829 topology_manager.go:215] "Topology Admit Handler" podUID="8587070e-a876-4dbd-845f-1ab8e43efec8" podNamespace="calico-system" podName="calico-node-rg4mg" Feb 13 15:28:37.493772 systemd[1]: Created slice kubepods-besteffort-pod8587070e_a876_4dbd_845f_1ab8e43efec8.slice - libcontainer container kubepods-besteffort-pod8587070e_a876_4dbd_845f_1ab8e43efec8.slice. Feb 13 15:28:37.536082 kubelet[2829]: I0213 15:28:37.535672 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-xtables-lock\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.536536 kubelet[2829]: I0213 15:28:37.536510 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-policysync\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.537349 kubelet[2829]: I0213 15:28:37.537330 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-var-run-calico\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.537489 kubelet[2829]: I0213 15:28:37.537470 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-var-lib-calico\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.537613 kubelet[2829]: I0213 15:28:37.537582 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-cni-log-dir\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " 
pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.537732 kubelet[2829]: I0213 15:28:37.537722 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74glz\" (UniqueName: \"kubernetes.io/projected/8587070e-a876-4dbd-845f-1ab8e43efec8-kube-api-access-74glz\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.537943 kubelet[2829]: I0213 15:28:37.537930 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-lib-modules\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.538084 kubelet[2829]: I0213 15:28:37.538067 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-cni-bin-dir\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.538178 kubelet[2829]: I0213 15:28:37.538169 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-cni-net-dir\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.538316 kubelet[2829]: I0213 15:28:37.538306 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8587070e-a876-4dbd-845f-1ab8e43efec8-tigera-ca-bundle\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.538426 kubelet[2829]: I0213 15:28:37.538416 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8587070e-a876-4dbd-845f-1ab8e43efec8-node-certs\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.538599 kubelet[2829]: I0213 15:28:37.538536 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8587070e-a876-4dbd-845f-1ab8e43efec8-flexvol-driver-host\") pod \"calico-node-rg4mg\" (UID: \"8587070e-a876-4dbd-845f-1ab8e43efec8\") " pod="calico-system/calico-node-rg4mg" Feb 13 15:28:37.610377 kubelet[2829]: I0213 15:28:37.609453 2829 topology_manager.go:215] "Topology Admit Handler" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" podNamespace="calico-system" podName="csi-node-driver-ngp8v" Feb 13 15:28:37.610377 kubelet[2829]: E0213 15:28:37.609739 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:37.625862 containerd[1484]: time="2025-02-13T15:28:37.625801831Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-7cc7468c55-j2gm9,Uid:b04ff83c-874c-49d7-b8d7-8eb82be18767,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:37.639723 kubelet[2829]: I0213 15:28:37.639684 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5tb9\" (UniqueName: \"kubernetes.io/projected/cc578b4f-a600-4134-9ea7-e3c0400423a8-kube-api-access-d5tb9\") pod \"csi-node-driver-ngp8v\" (UID: \"cc578b4f-a600-4134-9ea7-e3c0400423a8\") " pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:37.643297 kubelet[2829]: I0213 15:28:37.641001 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cc578b4f-a600-4134-9ea7-e3c0400423a8-registration-dir\") pod \"csi-node-driver-ngp8v\" (UID: \"cc578b4f-a600-4134-9ea7-e3c0400423a8\") " pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:37.643297 kubelet[2829]: I0213 15:28:37.641059 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cc578b4f-a600-4134-9ea7-e3c0400423a8-varrun\") pod \"csi-node-driver-ngp8v\" (UID: \"cc578b4f-a600-4134-9ea7-e3c0400423a8\") " pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:37.643297 kubelet[2829]: I0213 15:28:37.642303 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc578b4f-a600-4134-9ea7-e3c0400423a8-kubelet-dir\") pod \"csi-node-driver-ngp8v\" (UID: \"cc578b4f-a600-4134-9ea7-e3c0400423a8\") " pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:37.643297 kubelet[2829]: I0213 15:28:37.642344 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cc578b4f-a600-4134-9ea7-e3c0400423a8-socket-dir\") pod \"csi-node-driver-ngp8v\" (UID: \"cc578b4f-a600-4134-9ea7-e3c0400423a8\") " pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:37.660348 kubelet[2829]: E0213 15:28:37.659740 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.660348 kubelet[2829]: W0213 15:28:37.659779 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.660348 kubelet[2829]: E0213 15:28:37.659806 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.670690 containerd[1484]: time="2025-02-13T15:28:37.670568580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:37.670829 containerd[1484]: time="2025-02-13T15:28:37.670745229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:37.670829 containerd[1484]: time="2025-02-13T15:28:37.670765470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:37.671395 containerd[1484]: time="2025-02-13T15:28:37.671345020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:37.695383 kubelet[2829]: E0213 15:28:37.694156 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.695383 kubelet[2829]: W0213 15:28:37.694180 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.695383 kubelet[2829]: E0213 15:28:37.694206 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.703754 systemd[1]: Started cri-containerd-35cd7fefdd391ecd0b97edba46e3b93a5236861fc89f8d0c154ee2f41d1a5a5f.scope - libcontainer container 35cd7fefdd391ecd0b97edba46e3b93a5236861fc89f8d0c154ee2f41d1a5a5f. Feb 13 15:28:37.745758 kubelet[2829]: E0213 15:28:37.745724 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.746114 kubelet[2829]: W0213 15:28:37.745925 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.746114 kubelet[2829]: E0213 15:28:37.745959 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.746365 kubelet[2829]: E0213 15:28:37.746351 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.746577 kubelet[2829]: W0213 15:28:37.746430 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.746577 kubelet[2829]: E0213 15:28:37.746461 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.747353 kubelet[2829]: E0213 15:28:37.747180 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.747353 kubelet[2829]: W0213 15:28:37.747197 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.747353 kubelet[2829]: E0213 15:28:37.747219 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:37.747929 kubelet[2829]: E0213 15:28:37.747912 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.748115 kubelet[2829]: W0213 15:28:37.748098 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.750059 kubelet[2829]: E0213 15:28:37.748217 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.750351 kubelet[2829]: E0213 15:28:37.750214 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.750351 kubelet[2829]: W0213 15:28:37.750232 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.750351 kubelet[2829]: E0213 15:28:37.750303 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.750737 kubelet[2829]: E0213 15:28:37.750585 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.750737 kubelet[2829]: W0213 15:28:37.750647 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.750737 kubelet[2829]: E0213 15:28:37.750691 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.751056 kubelet[2829]: E0213 15:28:37.750961 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.751056 kubelet[2829]: W0213 15:28:37.750975 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.751056 kubelet[2829]: E0213 15:28:37.751038 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.751297 kubelet[2829]: E0213 15:28:37.751282 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.751441 kubelet[2829]: W0213 15:28:37.751360 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.751441 kubelet[2829]: E0213 15:28:37.751431 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:37.751736 kubelet[2829]: E0213 15:28:37.751642 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.751736 kubelet[2829]: W0213 15:28:37.751657 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.751736 kubelet[2829]: E0213 15:28:37.751688 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.751915 kubelet[2829]: E0213 15:28:37.751901 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.751981 kubelet[2829]: W0213 15:28:37.751969 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.752110 kubelet[2829]: E0213 15:28:37.752063 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.752394 kubelet[2829]: E0213 15:28:37.752291 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.752394 kubelet[2829]: W0213 15:28:37.752305 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.752394 kubelet[2829]: E0213 15:28:37.752363 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.752809 kubelet[2829]: E0213 15:28:37.752619 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.752809 kubelet[2829]: W0213 15:28:37.752633 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.752809 kubelet[2829]: E0213 15:28:37.752669 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.753237 kubelet[2829]: E0213 15:28:37.753131 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.753662 kubelet[2829]: W0213 15:28:37.753421 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.753662 kubelet[2829]: E0213 15:28:37.753476 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:37.754227 kubelet[2829]: E0213 15:28:37.754024 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.754227 kubelet[2829]: W0213 15:28:37.754042 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.754227 kubelet[2829]: E0213 15:28:37.754082 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.754980 kubelet[2829]: E0213 15:28:37.754781 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.754980 kubelet[2829]: W0213 15:28:37.754801 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.754980 kubelet[2829]: E0213 15:28:37.754850 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.755613 kubelet[2829]: E0213 15:28:37.755404 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.755613 kubelet[2829]: W0213 15:28:37.755422 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.755613 kubelet[2829]: E0213 15:28:37.755458 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.756135 kubelet[2829]: E0213 15:28:37.755986 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.757203 kubelet[2829]: W0213 15:28:37.756227 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.757203 kubelet[2829]: E0213 15:28:37.756324 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.757581 kubelet[2829]: E0213 15:28:37.757481 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.757581 kubelet[2829]: W0213 15:28:37.757499 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.757581 kubelet[2829]: E0213 15:28:37.757552 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:37.757925 kubelet[2829]: E0213 15:28:37.757828 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.757925 kubelet[2829]: W0213 15:28:37.757845 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.757925 kubelet[2829]: E0213 15:28:37.757906 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.758151 kubelet[2829]: E0213 15:28:37.758136 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.758470 kubelet[2829]: W0213 15:28:37.758214 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.758470 kubelet[2829]: E0213 15:28:37.758289 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.761314 kubelet[2829]: E0213 15:28:37.759436 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.761759 kubelet[2829]: W0213 15:28:37.761482 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.761759 kubelet[2829]: E0213 15:28:37.761552 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.762038 kubelet[2829]: E0213 15:28:37.761933 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.762038 kubelet[2829]: W0213 15:28:37.761949 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.762038 kubelet[2829]: E0213 15:28:37.761991 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.762299 kubelet[2829]: E0213 15:28:37.762264 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.762481 kubelet[2829]: W0213 15:28:37.762360 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.762581 kubelet[2829]: E0213 15:28:37.762537 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:37.762889 kubelet[2829]: E0213 15:28:37.762751 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.762889 kubelet[2829]: W0213 15:28:37.762765 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.762889 kubelet[2829]: E0213 15:28:37.762786 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.763060 kubelet[2829]: E0213 15:28:37.763046 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.763117 kubelet[2829]: W0213 15:28:37.763106 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.763173 kubelet[2829]: E0213 15:28:37.763165 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.779376 kubelet[2829]: E0213 15:28:37.779346 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:37.779946 kubelet[2829]: W0213 15:28:37.779839 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:37.779946 kubelet[2829]: E0213 15:28:37.779891 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:37.801080 containerd[1484]: time="2025-02-13T15:28:37.800923063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rg4mg,Uid:8587070e-a876-4dbd-845f-1ab8e43efec8,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:37.807534 containerd[1484]: time="2025-02-13T15:28:37.805898759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cc7468c55-j2gm9,Uid:b04ff83c-874c-49d7-b8d7-8eb82be18767,Namespace:calico-system,Attempt:0,} returns sandbox id \"35cd7fefdd391ecd0b97edba46e3b93a5236861fc89f8d0c154ee2f41d1a5a5f\"" Feb 13 15:28:37.811550 containerd[1484]: time="2025-02-13T15:28:37.811131549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 15:28:37.845763 containerd[1484]: time="2025-02-13T15:28:37.840185848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:37.845763 containerd[1484]: time="2025-02-13T15:28:37.840374297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:37.845763 containerd[1484]: time="2025-02-13T15:28:37.840394778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:37.845763 containerd[1484]: time="2025-02-13T15:28:37.844254178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:37.868524 systemd[1]: Started cri-containerd-f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287.scope - libcontainer container f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287. Feb 13 15:28:37.901889 containerd[1484]: time="2025-02-13T15:28:37.901842508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rg4mg,Uid:8587070e-a876-4dbd-845f-1ab8e43efec8,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\"" Feb 13 15:28:39.328106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2261589685.mount: Deactivated successfully. Feb 13 15:28:39.484487 kubelet[2829]: E0213 15:28:39.484436 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:39.976316 containerd[1484]: time="2025-02-13T15:28:39.975605487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:39.977289 containerd[1484]: time="2025-02-13T15:28:39.977207572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Feb 13 15:28:39.978369 containerd[1484]: time="2025-02-13T15:28:39.978319511Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:39.982195 containerd[1484]: time="2025-02-13T15:28:39.982128392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:39.983403 containerd[1484]: time="2025-02-13T15:28:39.983348096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.172169785s" Feb 13 15:28:39.983403 containerd[1484]: time="2025-02-13T15:28:39.983398619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Feb 13 15:28:39.985151 containerd[1484]: time="2025-02-13T15:28:39.985084988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 15:28:40.002788 containerd[1484]: time="2025-02-13T15:28:40.002728681Z" level=info msg="CreateContainer within sandbox \"35cd7fefdd391ecd0b97edba46e3b93a5236861fc89f8d0c154ee2f41d1a5a5f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 15:28:40.024215 containerd[1484]: time="2025-02-13T15:28:40.024054860Z" level=info msg="CreateContainer within sandbox 
\"35cd7fefdd391ecd0b97edba46e3b93a5236861fc89f8d0c154ee2f41d1a5a5f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"711cc4c52790980fac8a43ce66cd09ae701ea504fe7f0cde120df3b8daf89edf\"" Feb 13 15:28:40.025160 containerd[1484]: time="2025-02-13T15:28:40.025110477Z" level=info msg="StartContainer for \"711cc4c52790980fac8a43ce66cd09ae701ea504fe7f0cde120df3b8daf89edf\"" Feb 13 15:28:40.071137 systemd[1]: Started cri-containerd-711cc4c52790980fac8a43ce66cd09ae701ea504fe7f0cde120df3b8daf89edf.scope - libcontainer container 711cc4c52790980fac8a43ce66cd09ae701ea504fe7f0cde120df3b8daf89edf. Feb 13 15:28:40.120323 containerd[1484]: time="2025-02-13T15:28:40.119704611Z" level=info msg="StartContainer for \"711cc4c52790980fac8a43ce66cd09ae701ea504fe7f0cde120df3b8daf89edf\" returns successfully" Feb 13 15:28:40.643811 kubelet[2829]: I0213 15:28:40.642865 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7cc7468c55-j2gm9" podStartSLOduration=1.468903847 podStartE2EDuration="3.642659714s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:37.810332588 +0000 UTC m=+23.463956099" lastFinishedPulling="2025-02-13 15:28:39.984088535 +0000 UTC m=+25.637711966" observedRunningTime="2025-02-13 15:28:40.641140313 +0000 UTC m=+26.294763744" watchObservedRunningTime="2025-02-13 15:28:40.642659714 +0000 UTC m=+26.296283145" Feb 13 15:28:40.653032 kubelet[2829]: E0213 15:28:40.652972 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.653032 kubelet[2829]: W0213 15:28:40.653005 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.653032 kubelet[2829]: E0213 15:28:40.653036 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.653481 kubelet[2829]: E0213 15:28:40.653384 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.653481 kubelet[2829]: W0213 15:28:40.653399 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.653481 kubelet[2829]: E0213 15:28:40.653420 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.653801 kubelet[2829]: E0213 15:28:40.653776 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.653801 kubelet[2829]: W0213 15:28:40.653799 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.653973 kubelet[2829]: E0213 15:28:40.653820 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.654082 kubelet[2829]: E0213 15:28:40.654060 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.654082 kubelet[2829]: W0213 15:28:40.654074 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.654209 kubelet[2829]: E0213 15:28:40.654098 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.654386 kubelet[2829]: E0213 15:28:40.654368 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.654386 kubelet[2829]: W0213 15:28:40.654381 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.654549 kubelet[2829]: E0213 15:28:40.654395 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.654676 kubelet[2829]: E0213 15:28:40.654601 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.654676 kubelet[2829]: W0213 15:28:40.654610 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.654676 kubelet[2829]: E0213 15:28:40.654623 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.654925 kubelet[2829]: E0213 15:28:40.654863 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.654925 kubelet[2829]: W0213 15:28:40.654873 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.655078 kubelet[2829]: E0213 15:28:40.654950 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.655260 kubelet[2829]: E0213 15:28:40.655208 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.655260 kubelet[2829]: W0213 15:28:40.655221 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.655260 kubelet[2829]: E0213 15:28:40.655257 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.655568 kubelet[2829]: E0213 15:28:40.655544 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.655568 kubelet[2829]: W0213 15:28:40.655558 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.655880 kubelet[2829]: E0213 15:28:40.655591 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.655880 kubelet[2829]: E0213 15:28:40.655859 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.655880 kubelet[2829]: W0213 15:28:40.655870 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.655880 kubelet[2829]: E0213 15:28:40.655883 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.656242 kubelet[2829]: E0213 15:28:40.656117 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.656242 kubelet[2829]: W0213 15:28:40.656126 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.656242 kubelet[2829]: E0213 15:28:40.656139 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.656622 kubelet[2829]: E0213 15:28:40.656396 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.656622 kubelet[2829]: W0213 15:28:40.656406 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.656622 kubelet[2829]: E0213 15:28:40.656420 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.656622 kubelet[2829]: E0213 15:28:40.656665 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.657169 kubelet[2829]: W0213 15:28:40.656682 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.657169 kubelet[2829]: E0213 15:28:40.656695 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.657373 kubelet[2829]: E0213 15:28:40.656986 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.657373 kubelet[2829]: W0213 15:28:40.657293 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.657373 kubelet[2829]: E0213 15:28:40.657311 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.657642 kubelet[2829]: E0213 15:28:40.657608 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.657642 kubelet[2829]: W0213 15:28:40.657627 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.657642 kubelet[2829]: E0213 15:28:40.657643 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.673910 kubelet[2829]: E0213 15:28:40.673690 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.673910 kubelet[2829]: W0213 15:28:40.673764 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.673910 kubelet[2829]: E0213 15:28:40.673790 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.674487 kubelet[2829]: E0213 15:28:40.674340 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.674487 kubelet[2829]: W0213 15:28:40.674355 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.674487 kubelet[2829]: E0213 15:28:40.674375 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.674907 kubelet[2829]: E0213 15:28:40.674891 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.675365 kubelet[2829]: W0213 15:28:40.675320 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.675571 kubelet[2829]: E0213 15:28:40.675435 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.675736 kubelet[2829]: E0213 15:28:40.675675 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.675736 kubelet[2829]: W0213 15:28:40.675692 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.675940 kubelet[2829]: E0213 15:28:40.675709 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.676779 kubelet[2829]: E0213 15:28:40.676754 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.676927 kubelet[2829]: W0213 15:28:40.676875 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.677214 kubelet[2829]: E0213 15:28:40.677048 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.677378 kubelet[2829]: E0213 15:28:40.677363 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.677452 kubelet[2829]: W0213 15:28:40.677439 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.677551 kubelet[2829]: E0213 15:28:40.677527 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.677882 kubelet[2829]: E0213 15:28:40.677780 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.677882 kubelet[2829]: W0213 15:28:40.677794 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.677882 kubelet[2829]: E0213 15:28:40.677818 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.678072 kubelet[2829]: E0213 15:28:40.678059 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.678126 kubelet[2829]: W0213 15:28:40.678116 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.678200 kubelet[2829]: E0213 15:28:40.678183 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.678637 kubelet[2829]: E0213 15:28:40.678487 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.678637 kubelet[2829]: W0213 15:28:40.678501 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.678637 kubelet[2829]: E0213 15:28:40.678526 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.678915 kubelet[2829]: E0213 15:28:40.678900 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.679294 kubelet[2829]: W0213 15:28:40.679059 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.679294 kubelet[2829]: E0213 15:28:40.679092 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.679419 kubelet[2829]: E0213 15:28:40.679380 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.679419 kubelet[2829]: W0213 15:28:40.679394 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.679419 kubelet[2829]: E0213 15:28:40.679419 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.679613 kubelet[2829]: E0213 15:28:40.679602 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.679613 kubelet[2829]: W0213 15:28:40.679613 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.679818 kubelet[2829]: E0213 15:28:40.679804 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.679872 kubelet[2829]: E0213 15:28:40.679860 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.679872 kubelet[2829]: W0213 15:28:40.679871 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.680043 kubelet[2829]: E0213 15:28:40.679982 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:40.680043 kubelet[2829]: E0213 15:28:40.680033 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.680043 kubelet[2829]: W0213 15:28:40.680042 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.680166 kubelet[2829]: E0213 15:28:40.680061 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.680281 kubelet[2829]: E0213 15:28:40.680256 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.680281 kubelet[2829]: W0213 15:28:40.680288 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.680369 kubelet[2829]: E0213 15:28:40.680309 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.680661 kubelet[2829]: E0213 15:28:40.680646 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.680661 kubelet[2829]: W0213 15:28:40.680661 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.680760 kubelet[2829]: E0213 15:28:40.680679 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.681225 kubelet[2829]: E0213 15:28:40.681088 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.681225 kubelet[2829]: W0213 15:28:40.681105 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.681225 kubelet[2829]: E0213 15:28:40.681128 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 15:28:40.681538 kubelet[2829]: E0213 15:28:40.681462 2829 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 15:28:40.681538 kubelet[2829]: W0213 15:28:40.681477 2829 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 15:28:40.681538 kubelet[2829]: E0213 15:28:40.681491 2829 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 15:28:41.460996 containerd[1484]: time="2025-02-13T15:28:41.460191023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:41.462441 containerd[1484]: time="2025-02-13T15:28:41.462380701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Feb 13 15:28:41.464171 containerd[1484]: time="2025-02-13T15:28:41.464114554Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:41.469019 containerd[1484]: time="2025-02-13T15:28:41.468886692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:41.470097 containerd[1484]: time="2025-02-13T15:28:41.469439482Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.484308052s" Feb 13 15:28:41.470097 containerd[1484]: time="2025-02-13T15:28:41.469477124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 15:28:41.472632 containerd[1484]: time="2025-02-13T15:28:41.472602253Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 15:28:41.486260 kubelet[2829]: E0213 15:28:41.484615 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:41.494933 containerd[1484]: time="2025-02-13T15:28:41.494861415Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6\"" Feb 13 15:28:41.498968 containerd[1484]: time="2025-02-13T15:28:41.495481289Z" level=info msg="StartContainer for \"036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6\"" Feb 13 15:28:41.535482 systemd[1]: Started cri-containerd-036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6.scope - libcontainer container 036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6. Feb 13 15:28:41.574323 containerd[1484]: time="2025-02-13T15:28:41.574173099Z" level=info msg="StartContainer for \"036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6\" returns successfully" Feb 13 15:28:41.588472 systemd[1]: cri-containerd-036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6.scope: Deactivated successfully. 
Feb 13 15:28:41.632722 kubelet[2829]: I0213 15:28:41.632690 2829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:41.713397 containerd[1484]: time="2025-02-13T15:28:41.712519331Z" level=info msg="shim disconnected" id=036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6 namespace=k8s.io Feb 13 15:28:41.713397 containerd[1484]: time="2025-02-13T15:28:41.712603776Z" level=warning msg="cleaning up after shim disconnected" id=036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6 namespace=k8s.io Feb 13 15:28:41.713397 containerd[1484]: time="2025-02-13T15:28:41.712615497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:28:41.732166 containerd[1484]: time="2025-02-13T15:28:41.732086068Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:28:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:28:41.992139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-036aa9ce8a57326831496af9f047758cd96b5a2f3aefbf6d4358176a0f9eded6-rootfs.mount: Deactivated successfully. Feb 13 15:28:42.639251 containerd[1484]: time="2025-02-13T15:28:42.638848163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 15:28:43.484487 kubelet[2829]: E0213 15:28:43.483949 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:45.484682 kubelet[2829]: E0213 15:28:45.484585 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:45.593455 kubelet[2829]: I0213 15:28:45.592870 2829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:28:45.839183 containerd[1484]: time="2025-02-13T15:28:45.839006634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.841062 containerd[1484]: time="2025-02-13T15:28:45.840793855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 15:28:45.843342 containerd[1484]: time="2025-02-13T15:28:45.842376143Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.847524 containerd[1484]: time="2025-02-13T15:28:45.847465069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:45.848672 containerd[1484]: time="2025-02-13T15:28:45.848624614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.209724209s" Feb 13 15:28:45.848672 containerd[1484]: time="2025-02-13T15:28:45.848673257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 15:28:45.852977 containerd[1484]: time="2025-02-13T15:28:45.852911975Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:28:45.874671 containerd[1484]: time="2025-02-13T15:28:45.874547430Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276\"" Feb 13 15:28:45.879032 containerd[1484]: time="2025-02-13T15:28:45.878383766Z" level=info msg="StartContainer for \"34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276\"" Feb 13 15:28:45.911471 systemd[1]: Started cri-containerd-34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276.scope - libcontainer container 34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276. Feb 13 15:28:45.950605 containerd[1484]: time="2025-02-13T15:28:45.950551218Z" level=info msg="StartContainer for \"34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276\" returns successfully" Feb 13 15:28:46.426449 containerd[1484]: time="2025-02-13T15:28:46.426379311Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:28:46.430603 systemd[1]: cri-containerd-34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276.scope: Deactivated successfully. Feb 13 15:28:46.443306 kubelet[2829]: I0213 15:28:46.442716 2829 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:28:46.467417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276-rootfs.mount: Deactivated successfully. 
Feb 13 15:28:46.480890 kubelet[2829]: I0213 15:28:46.480037 2829 topology_manager.go:215] "Topology Admit Handler" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" podNamespace="kube-system" podName="coredns-76f75df574-56fvw" Feb 13 15:28:46.494771 kubelet[2829]: I0213 15:28:46.493706 2829 topology_manager.go:215] "Topology Admit Handler" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" podNamespace="kube-system" podName="coredns-76f75df574-vfsrv" Feb 13 15:28:46.494771 kubelet[2829]: I0213 15:28:46.493900 2829 topology_manager.go:215] "Topology Admit Handler" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" podNamespace="calico-system" podName="calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:46.494771 kubelet[2829]: I0213 15:28:46.494191 2829 topology_manager.go:215] "Topology Admit Handler" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" podNamespace="calico-apiserver" podName="calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:46.494771 kubelet[2829]: I0213 15:28:46.494309 2829 topology_manager.go:215] "Topology Admit Handler" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" podNamespace="calico-apiserver" podName="calico-apiserver-96f496cb-flt7v" Feb 13 15:28:46.498109 systemd[1]: Created slice kubepods-burstable-pod2492456e_62fc_4caa_b4e8_3b7a3936ed4e.slice - libcontainer container kubepods-burstable-pod2492456e_62fc_4caa_b4e8_3b7a3936ed4e.slice. Feb 13 15:28:46.515115 systemd[1]: Created slice kubepods-besteffort-pod7c98ace8_be26_494a_8fb2_3660c969f424.slice - libcontainer container kubepods-besteffort-pod7c98ace8_be26_494a_8fb2_3660c969f424.slice. Feb 13 15:28:46.519565 kubelet[2829]: I0213 15:28:46.518185 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9f8988-12d5-4588-beb7-07ec325b3215-tigera-ca-bundle\") pod \"calico-kube-controllers-5dfcc87966-z4lb7\" (UID: \"0f9f8988-12d5-4588-beb7-07ec325b3215\") " pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:46.519565 kubelet[2829]: I0213 15:28:46.518239 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5df25a74-af5d-4f05-b3c4-95a12fc65600-calico-apiserver-certs\") pod \"calico-apiserver-96f496cb-jtsr2\" (UID: \"5df25a74-af5d-4f05-b3c4-95a12fc65600\") " pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:46.520503 kubelet[2829]: I0213 15:28:46.520482 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0f89f8e5-fc39-4122-94aa-c93e88296236-config-volume\") pod \"coredns-76f75df574-vfsrv\" (UID: \"0f89f8e5-fc39-4122-94aa-c93e88296236\") " pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:46.523795 kubelet[2829]: I0213 15:28:46.521673 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq9mq\" (UniqueName: \"kubernetes.io/projected/2492456e-62fc-4caa-b4e8-3b7a3936ed4e-kube-api-access-xq9mq\") pod \"coredns-76f75df574-56fvw\" (UID: \"2492456e-62fc-4caa-b4e8-3b7a3936ed4e\") " pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:46.523795 kubelet[2829]: I0213 15:28:46.521726 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7bj\" (UniqueName: \"kubernetes.io/projected/5df25a74-af5d-4f05-b3c4-95a12fc65600-kube-api-access-ph7bj\") pod 
\"calico-apiserver-96f496cb-jtsr2\" (UID: \"5df25a74-af5d-4f05-b3c4-95a12fc65600\") " pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:46.523795 kubelet[2829]: I0213 15:28:46.521763 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2492456e-62fc-4caa-b4e8-3b7a3936ed4e-config-volume\") pod \"coredns-76f75df574-56fvw\" (UID: \"2492456e-62fc-4caa-b4e8-3b7a3936ed4e\") " pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:46.523795 kubelet[2829]: I0213 15:28:46.521788 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmw6w\" (UniqueName: \"kubernetes.io/projected/0f9f8988-12d5-4588-beb7-07ec325b3215-kube-api-access-gmw6w\") pod \"calico-kube-controllers-5dfcc87966-z4lb7\" (UID: \"0f9f8988-12d5-4588-beb7-07ec325b3215\") " pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:46.523795 kubelet[2829]: I0213 15:28:46.521810 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgf5c\" (UniqueName: \"kubernetes.io/projected/7c98ace8-be26-494a-8fb2-3660c969f424-kube-api-access-xgf5c\") pod \"calico-apiserver-96f496cb-flt7v\" (UID: \"7c98ace8-be26-494a-8fb2-3660c969f424\") " pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:46.524045 kubelet[2829]: I0213 15:28:46.521857 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c98ace8-be26-494a-8fb2-3660c969f424-calico-apiserver-certs\") pod \"calico-apiserver-96f496cb-flt7v\" (UID: \"7c98ace8-be26-494a-8fb2-3660c969f424\") " pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:46.524045 kubelet[2829]: I0213 15:28:46.521882 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4nt6\" (UniqueName: \"kubernetes.io/projected/0f89f8e5-fc39-4122-94aa-c93e88296236-kube-api-access-p4nt6\") pod \"coredns-76f75df574-vfsrv\" (UID: \"0f89f8e5-fc39-4122-94aa-c93e88296236\") " pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:46.538916 systemd[1]: Created slice kubepods-burstable-pod0f89f8e5_fc39_4122_94aa_c93e88296236.slice - libcontainer container kubepods-burstable-pod0f89f8e5_fc39_4122_94aa_c93e88296236.slice. Feb 13 15:28:46.548606 systemd[1]: Created slice kubepods-besteffort-pod5df25a74_af5d_4f05_b3c4_95a12fc65600.slice - libcontainer container kubepods-besteffort-pod5df25a74_af5d_4f05_b3c4_95a12fc65600.slice. Feb 13 15:28:46.563570 systemd[1]: Created slice kubepods-besteffort-pod0f9f8988_12d5_4588_beb7_07ec325b3215.slice - libcontainer container kubepods-besteffort-pod0f9f8988_12d5_4588_beb7_07ec325b3215.slice. 
Feb 13 15:28:46.582427 containerd[1484]: time="2025-02-13T15:28:46.582337106Z" level=info msg="shim disconnected" id=34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276 namespace=k8s.io Feb 13 15:28:46.582592 containerd[1484]: time="2025-02-13T15:28:46.582477474Z" level=warning msg="cleaning up after shim disconnected" id=34f513aad363d9982810f086080354da7cb994f675b01380da0b0a7ab267d276 namespace=k8s.io Feb 13 15:28:46.582592 containerd[1484]: time="2025-02-13T15:28:46.582492275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:28:46.657960 containerd[1484]: time="2025-02-13T15:28:46.657705336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 15:28:46.823042 containerd[1484]: time="2025-02-13T15:28:46.822899255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:46.829739 containerd[1484]: time="2025-02-13T15:28:46.829462467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:46.851910 containerd[1484]: time="2025-02-13T15:28:46.851598481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:0,}" Feb 13 15:28:46.857538 containerd[1484]: time="2025-02-13T15:28:46.857305484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:0,}" Feb 13 15:28:46.878313 containerd[1484]: time="2025-02-13T15:28:46.875665604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:46.994492 containerd[1484]: time="2025-02-13T15:28:46.994434613Z" level=error msg="Failed to destroy network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:46.994898 containerd[1484]: time="2025-02-13T15:28:46.994859997Z" level=error msg="encountered an error cleaning up failed sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:46.994957 containerd[1484]: time="2025-02-13T15:28:46.994931121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:46.995355 kubelet[2829]: E0213 15:28:46.995197 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:46.995355 kubelet[2829]: E0213 15:28:46.995260 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:46.996533 kubelet[2829]: E0213 15:28:46.996143 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:46.996533 kubelet[2829]: E0213 15:28:46.996238 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:47.012034 containerd[1484]: time="2025-02-13T15:28:47.011967572Z" level=error msg="Failed to destroy network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.016480 containerd[1484]: time="2025-02-13T15:28:47.015615220Z" level=error msg="encountered an error cleaning up failed sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.016480 containerd[1484]: time="2025-02-13T15:28:47.015714226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.016640 kubelet[2829]: E0213 15:28:47.016088 2829 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.016640 kubelet[2829]: E0213 15:28:47.016141 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:47.016640 kubelet[2829]: E0213 15:28:47.016161 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:47.016737 kubelet[2829]: E0213 15:28:47.016210 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:47.062435 containerd[1484]: time="2025-02-13T15:28:47.062240484Z" level=error msg="Failed to destroy network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.063698 containerd[1484]: time="2025-02-13T15:28:47.062798636Z" level=error msg="encountered an error cleaning up failed sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.063698 containerd[1484]: time="2025-02-13T15:28:47.062858799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:28:47.063877 kubelet[2829]: E0213 15:28:47.063149 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.063877 kubelet[2829]: E0213 15:28:47.063197 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:47.063877 kubelet[2829]: E0213 15:28:47.063220 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:47.063979 kubelet[2829]: E0213 15:28:47.063285 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:47.076413 containerd[1484]: time="2025-02-13T15:28:47.076042352Z" level=error msg="Failed to destroy network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.077020 containerd[1484]: time="2025-02-13T15:28:47.076791275Z" level=error msg="encountered an error cleaning up failed sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.077428 containerd[1484]: time="2025-02-13T15:28:47.077153176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.078140 kubelet[2829]: E0213 15:28:47.077700 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.078140 kubelet[2829]: E0213 15:28:47.077753 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:47.078140 kubelet[2829]: E0213 15:28:47.077774 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:47.078293 kubelet[2829]: E0213 15:28:47.077827 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:47.101728 containerd[1484]: time="2025-02-13T15:28:47.101547690Z" level=error msg="Failed to destroy network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.102637 containerd[1484]: time="2025-02-13T15:28:47.102340855Z" level=error msg="encountered an error cleaning up failed sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.102637 containerd[1484]: time="2025-02-13T15:28:47.102438820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.103458 kubelet[2829]: E0213 15:28:47.103158 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.103458 kubelet[2829]: E0213 15:28:47.103261 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:47.103458 kubelet[2829]: E0213 15:28:47.103319 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:47.103594 kubelet[2829]: E0213 15:28:47.103388 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:47.491724 systemd[1]: Created slice kubepods-besteffort-podcc578b4f_a600_4134_9ea7_e3c0400423a8.slice - libcontainer container kubepods-besteffort-podcc578b4f_a600_4134_9ea7_e3c0400423a8.slice. 
Feb 13 15:28:47.494894 containerd[1484]: time="2025-02-13T15:28:47.494848160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:0,}" Feb 13 15:28:47.556286 containerd[1484]: time="2025-02-13T15:28:47.555993054Z" level=error msg="Failed to destroy network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.556763 containerd[1484]: time="2025-02-13T15:28:47.556539525Z" level=error msg="encountered an error cleaning up failed sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.556763 containerd[1484]: time="2025-02-13T15:28:47.556625690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.557163 kubelet[2829]: E0213 15:28:47.557027 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.557163 kubelet[2829]: E0213 15:28:47.557077 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:47.557163 kubelet[2829]: E0213 15:28:47.557102 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:47.558659 kubelet[2829]: E0213 15:28:47.557151 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:47.659100 kubelet[2829]: I0213 15:28:47.659052 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe" Feb 13 15:28:47.660435 containerd[1484]: time="2025-02-13T15:28:47.659945873Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:47.660435 containerd[1484]: time="2025-02-13T15:28:47.660211688Z" level=info msg="Ensure that sandbox 2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe in task-service has been cleanup successfully" Feb 13 15:28:47.660698 containerd[1484]: time="2025-02-13T15:28:47.660637712Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:47.660785 containerd[1484]: time="2025-02-13T15:28:47.660769720Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:47.661725 containerd[1484]: time="2025-02-13T15:28:47.661684412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:47.662615 kubelet[2829]: I0213 15:28:47.662571 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec" Feb 13 15:28:47.663886 containerd[1484]: time="2025-02-13T15:28:47.663181698Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:47.663886 containerd[1484]: time="2025-02-13T15:28:47.663394990Z" level=info msg="Ensure that sandbox bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec in task-service has been cleanup successfully" Feb 13 15:28:47.664573 containerd[1484]: time="2025-02-13T15:28:47.664483812Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:28:47.664573 containerd[1484]: time="2025-02-13T15:28:47.664507693Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:47.665397 containerd[1484]: time="2025-02-13T15:28:47.665242575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:47.667090 kubelet[2829]: I0213 15:28:47.666980 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451" Feb 13 15:28:47.668463 containerd[1484]: time="2025-02-13T15:28:47.668036175Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:47.668463 containerd[1484]: time="2025-02-13T15:28:47.668247347Z" level=info msg="Ensure that sandbox df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451 in task-service has been cleanup successfully" Feb 13 
15:28:47.668856 containerd[1484]: time="2025-02-13T15:28:47.668793898Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:47.668856 containerd[1484]: time="2025-02-13T15:28:47.668819940Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:47.672199 containerd[1484]: time="2025-02-13T15:28:47.671999121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:1,}" Feb 13 15:28:47.673232 kubelet[2829]: I0213 15:28:47.673147 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696" Feb 13 15:28:47.675579 containerd[1484]: time="2025-02-13T15:28:47.675282309Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:47.675579 containerd[1484]: time="2025-02-13T15:28:47.675447158Z" level=info msg="Ensure that sandbox 114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696 in task-service has been cleanup successfully" Feb 13 15:28:47.677730 containerd[1484]: time="2025-02-13T15:28:47.676445815Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:47.677730 containerd[1484]: time="2025-02-13T15:28:47.676592504Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:47.677730 containerd[1484]: time="2025-02-13T15:28:47.676610865Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:47.677730 containerd[1484]: time="2025-02-13T15:28:47.676754993Z" level=info msg="Ensure that sandbox 0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3 in task-service has been cleanup successfully" Feb 13 15:28:47.677846 kubelet[2829]: I0213 15:28:47.675772 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3" Feb 13 15:28:47.678128 containerd[1484]: time="2025-02-13T15:28:47.678064668Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:47.678245 containerd[1484]: time="2025-02-13T15:28:47.678130712Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:47.680303 containerd[1484]: time="2025-02-13T15:28:47.679359782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:1,}" Feb 13 15:28:47.680789 containerd[1484]: time="2025-02-13T15:28:47.680753542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:47.681224 kubelet[2829]: I0213 15:28:47.681200 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa" Feb 13 15:28:47.681890 containerd[1484]: time="2025-02-13T15:28:47.681844404Z" level=info msg="StopPodSandbox for 
\"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:47.682755 containerd[1484]: time="2025-02-13T15:28:47.682488681Z" level=info msg="Ensure that sandbox 9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa in task-service has been cleanup successfully" Feb 13 15:28:47.683347 containerd[1484]: time="2025-02-13T15:28:47.683315488Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:47.683576 containerd[1484]: time="2025-02-13T15:28:47.683555702Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:47.686172 containerd[1484]: time="2025-02-13T15:28:47.686009282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:1,}" Feb 13 15:28:47.867994 containerd[1484]: time="2025-02-13T15:28:47.867878393Z" level=error msg="Failed to destroy network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.879336 containerd[1484]: time="2025-02-13T15:28:47.877336453Z" level=error msg="encountered an error cleaning up failed sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.879336 containerd[1484]: time="2025-02-13T15:28:47.877421698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.880697 kubelet[2829]: E0213 15:28:47.879635 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.880697 kubelet[2829]: E0213 15:28:47.879710 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:47.880697 kubelet[2829]: E0213 15:28:47.879731 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:47.880864 kubelet[2829]: E0213 15:28:47.879793 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:47.881772 systemd[1]: run-netns-cni\x2db7fc6335\x2d4e74\x2dbdab\x2dfadb\x2d1bb4418df513.mount: Deactivated successfully. Feb 13 15:28:47.881898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451-shm.mount: Deactivated successfully. Feb 13 15:28:47.881957 systemd[1]: run-netns-cni\x2d7feb28a2\x2da84d\x2dbe93\x2da48b\x2d58056e8df57b.mount: Deactivated successfully. Feb 13 15:28:47.882018 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3-shm.mount: Deactivated successfully. Feb 13 15:28:47.882141 systemd[1]: run-netns-cni\x2d43d970c2\x2d179b\x2d1222\x2deccc\x2d4cafb781d5d3.mount: Deactivated successfully. Feb 13 15:28:47.882192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa-shm.mount: Deactivated successfully. Feb 13 15:28:47.882250 systemd[1]: run-netns-cni\x2da7448132\x2de005\x2d380d\x2d7b96\x2d7317acae2ea5.mount: Deactivated successfully. Feb 13 15:28:47.882802 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec-shm.mount: Deactivated successfully. Feb 13 15:28:47.890992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8-shm.mount: Deactivated successfully. 
Feb 13 15:28:47.897928 containerd[1484]: time="2025-02-13T15:28:47.897776461Z" level=error msg="Failed to destroy network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.899003 containerd[1484]: time="2025-02-13T15:28:47.898367455Z" level=error msg="encountered an error cleaning up failed sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.899003 containerd[1484]: time="2025-02-13T15:28:47.898686673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.902624 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb-shm.mount: Deactivated successfully. Feb 13 15:28:47.904873 kubelet[2829]: E0213 15:28:47.903816 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.904873 kubelet[2829]: E0213 15:28:47.903870 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:47.904873 kubelet[2829]: E0213 15:28:47.903895 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:47.905063 kubelet[2829]: E0213 15:28:47.903951 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:47.925326 containerd[1484]: time="2025-02-13T15:28:47.924809165Z" level=error msg="Failed to destroy network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.925326 containerd[1484]: time="2025-02-13T15:28:47.925186347Z" level=error msg="encountered an error cleaning up failed sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.925326 containerd[1484]: time="2025-02-13T15:28:47.925240630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.926100 kubelet[2829]: E0213 15:28:47.925666 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.926100 kubelet[2829]: E0213 15:28:47.925723 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:47.926100 kubelet[2829]: E0213 15:28:47.925743 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:47.926298 kubelet[2829]: E0213 15:28:47.925809 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:47.929518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871-shm.mount: Deactivated successfully. Feb 13 15:28:47.945698 containerd[1484]: time="2025-02-13T15:28:47.945560871Z" level=error msg="Failed to destroy network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.947310 containerd[1484]: time="2025-02-13T15:28:47.946340875Z" level=error msg="encountered an error cleaning up failed sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.947310 containerd[1484]: time="2025-02-13T15:28:47.946412280Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.949084 kubelet[2829]: E0213 15:28:47.948490 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.949084 kubelet[2829]: E0213 15:28:47.948560 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:47.949084 kubelet[2829]: E0213 15:28:47.948582 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 
15:28:47.949238 kubelet[2829]: E0213 15:28:47.948653 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:47.949680 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633-shm.mount: Deactivated successfully. Feb 13 15:28:47.954748 containerd[1484]: time="2025-02-13T15:28:47.954583786Z" level=error msg="Failed to destroy network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.955686 containerd[1484]: time="2025-02-13T15:28:47.955651887Z" level=error msg="encountered an error cleaning up failed sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.956580 containerd[1484]: time="2025-02-13T15:28:47.956378049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.956726 kubelet[2829]: E0213 15:28:47.956629 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.956726 kubelet[2829]: E0213 15:28:47.956684 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:47.956726 kubelet[2829]: E0213 15:28:47.956707 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:47.957178 kubelet[2829]: E0213 15:28:47.956766 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:47.962552 containerd[1484]: time="2025-02-13T15:28:47.962505159Z" level=error msg="Failed to destroy network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.962880 containerd[1484]: time="2025-02-13T15:28:47.962844458Z" level=error msg="encountered an error cleaning up failed sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.962947 containerd[1484]: time="2025-02-13T15:28:47.962919183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.963304 kubelet[2829]: E0213 15:28:47.963178 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:47.963304 kubelet[2829]: E0213 15:28:47.963230 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:47.963304 kubelet[2829]: E0213 15:28:47.963256 2829 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:47.963567 kubelet[2829]: E0213 15:28:47.963537 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:48.686071 kubelet[2829]: I0213 15:28:48.685851 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8" Feb 13 15:28:48.689144 containerd[1484]: time="2025-02-13T15:28:48.687651948Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:28:48.689144 containerd[1484]: time="2025-02-13T15:28:48.688469755Z" level=info msg="Ensure that sandbox 1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8 in task-service has been cleanup successfully" Feb 13 15:28:48.689144 containerd[1484]: time="2025-02-13T15:28:48.688927141Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:28:48.689144 containerd[1484]: time="2025-02-13T15:28:48.688946222Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:28:48.690263 containerd[1484]: time="2025-02-13T15:28:48.689839634Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:48.690353 kubelet[2829]: I0213 15:28:48.690320 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4" Feb 13 15:28:48.691544 containerd[1484]: time="2025-02-13T15:28:48.691435126Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:48.691544 containerd[1484]: time="2025-02-13T15:28:48.691469928Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:48.691631 containerd[1484]: time="2025-02-13T15:28:48.691615936Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:28:48.692444 containerd[1484]: time="2025-02-13T15:28:48.691755504Z" level=info msg="Ensure that sandbox 0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4 in task-service has been cleanup successfully" Feb 13 15:28:48.693422 containerd[1484]: 
time="2025-02-13T15:28:48.693360277Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:28:48.693632 containerd[1484]: time="2025-02-13T15:28:48.693612971Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:28:48.693738 containerd[1484]: time="2025-02-13T15:28:48.693605091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:48.694250 containerd[1484]: time="2025-02-13T15:28:48.694227767Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:48.694744 containerd[1484]: time="2025-02-13T15:28:48.694665552Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:48.694897 containerd[1484]: time="2025-02-13T15:28:48.694795999Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:48.695263 kubelet[2829]: I0213 15:28:48.695015 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb" Feb 13 15:28:48.695738 containerd[1484]: time="2025-02-13T15:28:48.695659129Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:28:48.698966 containerd[1484]: time="2025-02-13T15:28:48.697071210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:48.698966 containerd[1484]: time="2025-02-13T15:28:48.697245860Z" level=info msg="Ensure that sandbox 2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb in task-service has been cleanup successfully" Feb 13 15:28:48.698966 containerd[1484]: time="2025-02-13T15:28:48.698295761Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:28:48.698966 containerd[1484]: time="2025-02-13T15:28:48.698327403Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:28:48.704194 containerd[1484]: time="2025-02-13T15:28:48.702341634Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:48.704194 containerd[1484]: time="2025-02-13T15:28:48.702466441Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:48.704194 containerd[1484]: time="2025-02-13T15:28:48.702489323Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:48.711531 containerd[1484]: time="2025-02-13T15:28:48.710708516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:48.716913 kubelet[2829]: I0213 15:28:48.716850 2829 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871" Feb 13 15:28:48.719219 containerd[1484]: time="2025-02-13T15:28:48.719170523Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:28:48.722914 containerd[1484]: time="2025-02-13T15:28:48.722835534Z" level=info msg="Ensure that sandbox a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871 in task-service has been cleanup successfully" Feb 13 15:28:48.725917 containerd[1484]: time="2025-02-13T15:28:48.725861869Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:28:48.725917 containerd[1484]: time="2025-02-13T15:28:48.725902511Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:28:48.728058 containerd[1484]: time="2025-02-13T15:28:48.728019313Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:48.728347 containerd[1484]: time="2025-02-13T15:28:48.728312050Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:28:48.728347 containerd[1484]: time="2025-02-13T15:28:48.728334731Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:48.729119 kubelet[2829]: I0213 15:28:48.728846 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c" Feb 13 15:28:48.733996 containerd[1484]: time="2025-02-13T15:28:48.733952735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:2,}" Feb 13 15:28:48.734521 containerd[1484]: time="2025-02-13T15:28:48.734221190Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:28:48.734521 containerd[1484]: time="2025-02-13T15:28:48.734440643Z" level=info msg="Ensure that sandbox 41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c in task-service has been cleanup successfully" Feb 13 15:28:48.734656 containerd[1484]: time="2025-02-13T15:28:48.734625533Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:28:48.734656 containerd[1484]: time="2025-02-13T15:28:48.734640374Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:28:48.735754 containerd[1484]: time="2025-02-13T15:28:48.735502504Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:48.735754 containerd[1484]: time="2025-02-13T15:28:48.735602350Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:48.735754 containerd[1484]: time="2025-02-13T15:28:48.735612830Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:48.737333 containerd[1484]: time="2025-02-13T15:28:48.737302848Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:48.741754 kubelet[2829]: I0213 15:28:48.741521 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633" Feb 13 15:28:48.742997 containerd[1484]: time="2025-02-13T15:28:48.742899370Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:28:48.743678 containerd[1484]: time="2025-02-13T15:28:48.743313594Z" level=info msg="Ensure that sandbox 33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633 in task-service has been cleanup successfully" Feb 13 15:28:48.743678 containerd[1484]: time="2025-02-13T15:28:48.743575809Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:28:48.743678 containerd[1484]: time="2025-02-13T15:28:48.743590970Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:28:48.744897 containerd[1484]: time="2025-02-13T15:28:48.744195045Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:48.745149 containerd[1484]: time="2025-02-13T15:28:48.745124818Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:48.745249 containerd[1484]: time="2025-02-13T15:28:48.745235265Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:48.746630 containerd[1484]: time="2025-02-13T15:28:48.746582142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:2,}" Feb 13 15:28:48.882818 systemd[1]: run-netns-cni\x2d7acb055c\x2d2e12\x2d2cac\x2df3b2\x2d60820114fb6c.mount: Deactivated successfully. Feb 13 15:28:48.882911 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4-shm.mount: Deactivated successfully. Feb 13 15:28:48.882962 systemd[1]: run-netns-cni\x2d583b97d2\x2d7490\x2df855\x2d3be8\x2d09f4371bc56a.mount: Deactivated successfully. Feb 13 15:28:48.883416 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c-shm.mount: Deactivated successfully. Feb 13 15:28:48.883514 systemd[1]: run-netns-cni\x2d33c020f0\x2d8de8\x2dea44\x2d8a5d\x2d70be762b0164.mount: Deactivated successfully. Feb 13 15:28:48.883563 systemd[1]: run-netns-cni\x2dbe401c10\x2da343\x2dcb52\x2d9bb5\x2d3770ed95b563.mount: Deactivated successfully. Feb 13 15:28:48.883604 systemd[1]: run-netns-cni\x2d28fcbb40\x2db90c\x2dfd42\x2da321\x2d339e15953ca4.mount: Deactivated successfully. Feb 13 15:28:48.883648 systemd[1]: run-netns-cni\x2d43f99d44\x2d20dc\x2dceae\x2dc5a3\x2d4d9136599a2c.mount: Deactivated successfully. 
Feb 13 15:28:48.947389 containerd[1484]: time="2025-02-13T15:28:48.945390713Z" level=error msg="Failed to destroy network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:48.949683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e-shm.mount: Deactivated successfully. Feb 13 15:28:48.951978 containerd[1484]: time="2025-02-13T15:28:48.951903568Z" level=error msg="encountered an error cleaning up failed sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:48.953217 containerd[1484]: time="2025-02-13T15:28:48.953136999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:48.957226 kubelet[2829]: E0213 15:28:48.956128 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:48.957226 kubelet[2829]: E0213 15:28:48.956203 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:48.957226 kubelet[2829]: E0213 15:28:48.956226 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:48.957500 kubelet[2829]: E0213 15:28:48.956302 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:49.023441 containerd[1484]: time="2025-02-13T15:28:49.022990313Z" level=error msg="Failed to destroy network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.025650 containerd[1484]: time="2025-02-13T15:28:49.025605625Z" level=error msg="encountered an error cleaning up failed sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.026493 containerd[1484]: time="2025-02-13T15:28:49.025791756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.028009 kubelet[2829]: E0213 15:28:49.027442 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.028009 kubelet[2829]: E0213 15:28:49.027501 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:49.028009 kubelet[2829]: E0213 15:28:49.027523 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:49.027708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343-shm.mount: Deactivated successfully. 
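Every sandbox failure above bottoms out in the same CNI error: the Calico plugin cannot stat /var/lib/calico/nodename, the file the error text says calico/node should have written and mounted, so both the add and delete CNI operations fail and kubelet keeps recreating the sandboxes. A minimal sketch of that dependency, assuming only what the error message itself states (readNodeName and nodeNameFile are illustrative names, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodeNameFile is the path named in the errors above; calico/node is expected
// to create it, and the CNI plugin needs it on every add/delete.
const nodeNameFile = "/var/lib/calico/nodename"

// readNodeName mirrors the failure mode seen in the log: if the file is
// missing, network setup cannot proceed and the caller surfaces the error.
func readNodeName() (string, error) {
	data, err := os.ReadFile(nodeNameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodeName()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}

The hint embedded in the message points at the fix: once calico/node is running on this host and has written that file, the same check succeeds and the pending sandboxes can be set up.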
Feb 13 15:28:49.028385 kubelet[2829]: E0213 15:28:49.027569 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:49.039312 containerd[1484]: time="2025-02-13T15:28:49.037617402Z" level=error msg="Failed to destroy network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.043212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69-shm.mount: Deactivated successfully. Feb 13 15:28:49.044173 containerd[1484]: time="2025-02-13T15:28:49.044129660Z" level=error msg="Failed to destroy network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.045582 containerd[1484]: time="2025-02-13T15:28:49.045547742Z" level=error msg="encountered an error cleaning up failed sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.045726 containerd[1484]: time="2025-02-13T15:28:49.045705872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.047184 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690-shm.mount: Deactivated successfully. 
Feb 13 15:28:49.047402 containerd[1484]: time="2025-02-13T15:28:49.047357047Z" level=error msg="encountered an error cleaning up failed sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.047453 containerd[1484]: time="2025-02-13T15:28:49.047430532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.047642 kubelet[2829]: E0213 15:28:49.047620 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.047775 kubelet[2829]: E0213 15:28:49.047764 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:49.047870 kubelet[2829]: E0213 15:28:49.047855 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:49.048013 kubelet[2829]: E0213 15:28:49.048001 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:49.049578 kubelet[2829]: E0213 15:28:49.049539 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.049680 kubelet[2829]: E0213 15:28:49.049597 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:49.049680 kubelet[2829]: E0213 15:28:49.049616 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:49.049788 kubelet[2829]: E0213 15:28:49.049677 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:49.051975 containerd[1484]: time="2025-02-13T15:28:49.051931433Z" level=error msg="Failed to destroy network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.053447 containerd[1484]: time="2025-02-13T15:28:49.053403718Z" level=error msg="encountered an error cleaning up failed sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.053599 containerd[1484]: time="2025-02-13T15:28:49.053481603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.054762 kubelet[2829]: E0213 15:28:49.054588 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.054955 kubelet[2829]: E0213 15:28:49.054833 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:49.055287 kubelet[2829]: E0213 15:28:49.055040 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:49.055287 kubelet[2829]: E0213 15:28:49.055250 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:49.065541 containerd[1484]: time="2025-02-13T15:28:49.065396175Z" level=error msg="Failed to destroy network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.066611 containerd[1484]: time="2025-02-13T15:28:49.066520960Z" level=error msg="encountered an error cleaning up failed sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.066611 containerd[1484]: time="2025-02-13T15:28:49.066595364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.067578 kubelet[2829]: E0213 15:28:49.066819 2829 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:49.067578 kubelet[2829]: E0213 15:28:49.066873 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:49.067578 kubelet[2829]: E0213 15:28:49.066908 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:49.067674 kubelet[2829]: E0213 15:28:49.066958 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:49.748095 kubelet[2829]: I0213 15:28:49.748057 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690" Feb 13 15:28:49.750935 containerd[1484]: time="2025-02-13T15:28:49.750888486Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:28:49.751542 containerd[1484]: time="2025-02-13T15:28:49.751490321Z" level=info msg="Ensure that sandbox f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690 in task-service has been cleanup successfully" Feb 13 15:28:49.753895 containerd[1484]: time="2025-02-13T15:28:49.753807576Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:28:49.753895 containerd[1484]: time="2025-02-13T15:28:49.753847018Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:28:49.754481 containerd[1484]: time="2025-02-13T15:28:49.754322686Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:28:49.754664 containerd[1484]: time="2025-02-13T15:28:49.754601062Z" level=info msg="TearDown network for sandbox 
\"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:28:49.754664 containerd[1484]: time="2025-02-13T15:28:49.754633384Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:28:49.755292 kubelet[2829]: I0213 15:28:49.755226 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69" Feb 13 15:28:49.756316 containerd[1484]: time="2025-02-13T15:28:49.755096891Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:49.756533 containerd[1484]: time="2025-02-13T15:28:49.756363044Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:49.757145 containerd[1484]: time="2025-02-13T15:28:49.756701104Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:49.757145 containerd[1484]: time="2025-02-13T15:28:49.756621659Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:28:49.757145 containerd[1484]: time="2025-02-13T15:28:49.757093767Z" level=info msg="Ensure that sandbox f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69 in task-service has been cleanup successfully" Feb 13 15:28:49.759373 containerd[1484]: time="2025-02-13T15:28:49.759313655Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:28:49.759539 containerd[1484]: time="2025-02-13T15:28:49.759491066Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:28:49.761044 containerd[1484]: time="2025-02-13T15:28:49.760475683Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:28:49.761044 containerd[1484]: time="2025-02-13T15:28:49.760565888Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:28:49.761044 containerd[1484]: time="2025-02-13T15:28:49.760575089Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:28:49.761044 containerd[1484]: time="2025-02-13T15:28:49.760744458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:49.761646 containerd[1484]: time="2025-02-13T15:28:49.761615789Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:49.761761 containerd[1484]: time="2025-02-13T15:28:49.761711475Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:28:49.761839 containerd[1484]: time="2025-02-13T15:28:49.761722515Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:49.763896 containerd[1484]: time="2025-02-13T15:28:49.763741152Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:3,}" Feb 13 15:28:49.766084 kubelet[2829]: I0213 15:28:49.766046 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e" Feb 13 15:28:49.768319 containerd[1484]: time="2025-02-13T15:28:49.768153009Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:28:49.768401 containerd[1484]: time="2025-02-13T15:28:49.768331299Z" level=info msg="Ensure that sandbox 48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e in task-service has been cleanup successfully" Feb 13 15:28:49.770917 containerd[1484]: time="2025-02-13T15:28:49.770357536Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:28:49.770917 containerd[1484]: time="2025-02-13T15:28:49.770641913Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:28:49.772176 containerd[1484]: time="2025-02-13T15:28:49.772090717Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:28:49.774433 kubelet[2829]: I0213 15:28:49.774307 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f" Feb 13 15:28:49.774792 containerd[1484]: time="2025-02-13T15:28:49.774683668Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:28:49.774792 containerd[1484]: time="2025-02-13T15:28:49.774712229Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:28:49.775616 containerd[1484]: time="2025-02-13T15:28:49.775209658Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:49.776254 containerd[1484]: time="2025-02-13T15:28:49.776150673Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:49.777692 containerd[1484]: time="2025-02-13T15:28:49.777609277Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:49.778748 containerd[1484]: time="2025-02-13T15:28:49.778633777Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:28:49.780844 containerd[1484]: time="2025-02-13T15:28:49.779886450Z" level=info msg="Ensure that sandbox ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f in task-service has been cleanup successfully" Feb 13 15:28:49.780844 containerd[1484]: time="2025-02-13T15:28:49.780100702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:3,}" Feb 13 15:28:49.781078 containerd[1484]: time="2025-02-13T15:28:49.781054397Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:28:49.781954 containerd[1484]: time="2025-02-13T15:28:49.781495503Z" level=info msg="StopPodSandbox for 
\"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:28:49.783392 containerd[1484]: time="2025-02-13T15:28:49.782618928Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:28:49.783392 containerd[1484]: time="2025-02-13T15:28:49.782707733Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:28:49.783392 containerd[1484]: time="2025-02-13T15:28:49.782718454Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:28:49.785171 containerd[1484]: time="2025-02-13T15:28:49.785095112Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:49.786256 kubelet[2829]: I0213 15:28:49.786228 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee" Feb 13 15:28:49.787319 containerd[1484]: time="2025-02-13T15:28:49.787287559Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:49.787449 containerd[1484]: time="2025-02-13T15:28:49.787432888Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:49.790142 containerd[1484]: time="2025-02-13T15:28:49.790093882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:49.791628 containerd[1484]: time="2025-02-13T15:28:49.790586991Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:28:49.792957 containerd[1484]: time="2025-02-13T15:28:49.792055236Z" level=info msg="Ensure that sandbox 3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee in task-service has been cleanup successfully" Feb 13 15:28:49.795332 containerd[1484]: time="2025-02-13T15:28:49.794858519Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:28:49.796218 kubelet[2829]: I0213 15:28:49.795873 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343" Feb 13 15:28:49.797943 containerd[1484]: time="2025-02-13T15:28:49.797526554Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:28:49.799057 containerd[1484]: time="2025-02-13T15:28:49.797519793Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:28:49.799592 containerd[1484]: time="2025-02-13T15:28:49.799449665Z" level=info msg="Ensure that sandbox 54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343 in task-service has been cleanup successfully" Feb 13 15:28:49.801778 containerd[1484]: time="2025-02-13T15:28:49.801734398Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:28:49.802531 containerd[1484]: time="2025-02-13T15:28:49.801989493Z" level=info msg="StopPodSandbox for 
\"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:28:49.803400 containerd[1484]: time="2025-02-13T15:28:49.803375173Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:28:49.803705 containerd[1484]: time="2025-02-13T15:28:49.803621347Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:28:49.804709 containerd[1484]: time="2025-02-13T15:28:49.803656069Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:28:49.804709 containerd[1484]: time="2025-02-13T15:28:49.803852441Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:28:49.804709 containerd[1484]: time="2025-02-13T15:28:49.803865362Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:28:49.805461 containerd[1484]: time="2025-02-13T15:28:49.805437853Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:28:49.805828 containerd[1484]: time="2025-02-13T15:28:49.805806874Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:49.806044 containerd[1484]: time="2025-02-13T15:28:49.806025727Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:49.806145 containerd[1484]: time="2025-02-13T15:28:49.806130613Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:49.807787 containerd[1484]: time="2025-02-13T15:28:49.807740026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:49.808201 containerd[1484]: time="2025-02-13T15:28:49.808177012Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:49.808391 containerd[1484]: time="2025-02-13T15:28:49.808374743Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:49.808459 containerd[1484]: time="2025-02-13T15:28:49.808446507Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:49.810381 containerd[1484]: time="2025-02-13T15:28:49.810353618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:3,}" Feb 13 15:28:49.879065 systemd[1]: run-netns-cni\x2dc868cf22\x2de226\x2d660c\x2d2b88\x2d5532e3353fbb.mount: Deactivated successfully. Feb 13 15:28:49.879196 systemd[1]: run-netns-cni\x2d30f2f491\x2d6c27\x2d1298\x2d2717\x2db2863e796afb.mount: Deactivated successfully. Feb 13 15:28:49.879673 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee-shm.mount: Deactivated successfully. 
Feb 13 15:28:49.879770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f-shm.mount: Deactivated successfully. Feb 13 15:28:49.879828 systemd[1]: run-netns-cni\x2d8d9c4505\x2df017\x2d52e5\x2d29f8\x2da8e4e35bd68f.mount: Deactivated successfully. Feb 13 15:28:49.879885 systemd[1]: run-netns-cni\x2df429e8e5\x2dfec9\x2d6c0a\x2d5f26\x2d6ec009f7ffdf.mount: Deactivated successfully. Feb 13 15:28:49.879936 systemd[1]: run-netns-cni\x2dfffa0989\x2d05a2\x2d66ff\x2d56d1\x2dfbdc24bad000.mount: Deactivated successfully. Feb 13 15:28:49.879985 systemd[1]: run-netns-cni\x2d48001339\x2d856b\x2d3cf7\x2dc4cd\x2d32dfa196ebbb.mount: Deactivated successfully. Feb 13 15:28:50.037866 containerd[1484]: time="2025-02-13T15:28:50.037031913Z" level=error msg="Failed to destroy network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.041316 containerd[1484]: time="2025-02-13T15:28:50.041099390Z" level=error msg="encountered an error cleaning up failed sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.043483 containerd[1484]: time="2025-02-13T15:28:50.043143550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.044903 kubelet[2829]: E0213 15:28:50.044835 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.044903 kubelet[2829]: E0213 15:28:50.044902 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:50.046397 kubelet[2829]: E0213 15:28:50.044925 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:50.046397 kubelet[2829]: E0213 15:28:50.044989 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:50.064210 containerd[1484]: time="2025-02-13T15:28:50.064036612Z" level=error msg="Failed to destroy network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.064829 containerd[1484]: time="2025-02-13T15:28:50.064793216Z" level=error msg="encountered an error cleaning up failed sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.065440 containerd[1484]: time="2025-02-13T15:28:50.065405652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.066373 kubelet[2829]: E0213 15:28:50.066162 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.066373 kubelet[2829]: E0213 15:28:50.066234 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:50.066373 kubelet[2829]: E0213 15:28:50.066255 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:50.066530 kubelet[2829]: E0213 15:28:50.066322 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:50.134008 containerd[1484]: time="2025-02-13T15:28:50.133757409Z" level=error msg="Failed to destroy network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.134893 containerd[1484]: time="2025-02-13T15:28:50.134819192Z" level=error msg="encountered an error cleaning up failed sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.134993 containerd[1484]: time="2025-02-13T15:28:50.134903396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.135344 kubelet[2829]: E0213 15:28:50.135173 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.135344 kubelet[2829]: E0213 15:28:50.135233 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:50.135344 kubelet[2829]: E0213 15:28:50.135259 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:50.135450 kubelet[2829]: E0213 15:28:50.135339 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:50.140854 containerd[1484]: time="2025-02-13T15:28:50.140546927Z" level=error msg="Failed to destroy network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.142614 containerd[1484]: time="2025-02-13T15:28:50.142479080Z" level=error msg="encountered an error cleaning up failed sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.142614 containerd[1484]: time="2025-02-13T15:28:50.142572045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.143451 kubelet[2829]: E0213 15:28:50.142882 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.143451 kubelet[2829]: E0213 15:28:50.142931 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:50.143451 kubelet[2829]: E0213 15:28:50.142953 2829 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:50.143543 kubelet[2829]: E0213 15:28:50.143003 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:50.148811 containerd[1484]: time="2025-02-13T15:28:50.147527095Z" level=error msg="Failed to destroy network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.149078 containerd[1484]: time="2025-02-13T15:28:50.148959619Z" level=error msg="Failed to destroy network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.150230 containerd[1484]: time="2025-02-13T15:28:50.150006360Z" level=error msg="encountered an error cleaning up failed sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.150230 containerd[1484]: time="2025-02-13T15:28:50.150085124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.150652 kubelet[2829]: E0213 15:28:50.150504 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.150652 kubelet[2829]: E0213 15:28:50.150567 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:50.150652 kubelet[2829]: E0213 15:28:50.150589 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:50.150780 kubelet[2829]: E0213 15:28:50.150646 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:50.151866 containerd[1484]: time="2025-02-13T15:28:50.151818546Z" level=error msg="encountered an error cleaning up failed sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.151933 containerd[1484]: time="2025-02-13T15:28:50.151898310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.153340 kubelet[2829]: E0213 15:28:50.152364 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:50.153340 kubelet[2829]: E0213 15:28:50.152424 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:50.153340 kubelet[2829]: E0213 15:28:50.152444 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:50.153501 kubelet[2829]: E0213 15:28:50.152502 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:50.810837 kubelet[2829]: I0213 15:28:50.810800 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071" Feb 13 15:28:50.813037 containerd[1484]: time="2025-02-13T15:28:50.811784064Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:28:50.813037 containerd[1484]: time="2025-02-13T15:28:50.812060200Z" level=info msg="Ensure that sandbox 4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071 in task-service has been cleanup successfully" Feb 13 15:28:50.817083 containerd[1484]: time="2025-02-13T15:28:50.817027650Z" level=info msg="TearDown network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" successfully" Feb 13 15:28:50.817439 containerd[1484]: time="2025-02-13T15:28:50.817290546Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" returns successfully" Feb 13 15:28:50.819625 containerd[1484]: time="2025-02-13T15:28:50.819109692Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:28:50.819946 containerd[1484]: time="2025-02-13T15:28:50.819925500Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:28:50.820121 containerd[1484]: time="2025-02-13T15:28:50.820035586Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:28:50.820830 containerd[1484]: time="2025-02-13T15:28:50.820717946Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:28:50.820830 containerd[1484]: time="2025-02-13T15:28:50.820818192Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:28:50.820830 containerd[1484]: time="2025-02-13T15:28:50.820828113Z" level=info msg="StopPodSandbox for 
\"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:28:50.821819 containerd[1484]: time="2025-02-13T15:28:50.821656801Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:50.821819 containerd[1484]: time="2025-02-13T15:28:50.821747726Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:50.821819 containerd[1484]: time="2025-02-13T15:28:50.821757207Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:50.822943 kubelet[2829]: I0213 15:28:50.822196 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff" Feb 13 15:28:50.823082 containerd[1484]: time="2025-02-13T15:28:50.822424206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:4,}" Feb 13 15:28:50.825036 containerd[1484]: time="2025-02-13T15:28:50.824998316Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:28:50.825731 containerd[1484]: time="2025-02-13T15:28:50.825686597Z" level=info msg="Ensure that sandbox 3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff in task-service has been cleanup successfully" Feb 13 15:28:50.826472 containerd[1484]: time="2025-02-13T15:28:50.826444001Z" level=info msg="TearDown network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" successfully" Feb 13 15:28:50.826949 containerd[1484]: time="2025-02-13T15:28:50.826780221Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" returns successfully" Feb 13 15:28:50.830419 containerd[1484]: time="2025-02-13T15:28:50.830034931Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:28:50.832491 containerd[1484]: time="2025-02-13T15:28:50.831489496Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:28:50.832491 containerd[1484]: time="2025-02-13T15:28:50.831523458Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:28:50.833815 containerd[1484]: time="2025-02-13T15:28:50.833637862Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:28:50.834447 containerd[1484]: time="2025-02-13T15:28:50.834318301Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:28:50.834447 containerd[1484]: time="2025-02-13T15:28:50.834346623Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:28:50.835936 containerd[1484]: time="2025-02-13T15:28:50.835537413Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:50.835936 containerd[1484]: time="2025-02-13T15:28:50.835646739Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" 
successfully" Feb 13 15:28:50.835936 containerd[1484]: time="2025-02-13T15:28:50.835656740Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:50.838747 kubelet[2829]: I0213 15:28:50.837575 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554" Feb 13 15:28:50.841693 containerd[1484]: time="2025-02-13T15:28:50.841651690Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:28:50.845392 containerd[1484]: time="2025-02-13T15:28:50.845219779Z" level=info msg="Ensure that sandbox 1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554 in task-service has been cleanup successfully" Feb 13 15:28:50.846189 containerd[1484]: time="2025-02-13T15:28:50.842153360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:4,}" Feb 13 15:28:50.846523 containerd[1484]: time="2025-02-13T15:28:50.846494974Z" level=info msg="TearDown network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" successfully" Feb 13 15:28:50.846646 containerd[1484]: time="2025-02-13T15:28:50.846629461Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" returns successfully" Feb 13 15:28:50.848584 containerd[1484]: time="2025-02-13T15:28:50.848450088Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:28:50.848584 containerd[1484]: time="2025-02-13T15:28:50.848586576Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:28:50.848708 containerd[1484]: time="2025-02-13T15:28:50.848599817Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:28:50.850054 containerd[1484]: time="2025-02-13T15:28:50.849784246Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:28:50.850054 containerd[1484]: time="2025-02-13T15:28:50.849946735Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:28:50.850054 containerd[1484]: time="2025-02-13T15:28:50.849959176Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:28:50.851729 containerd[1484]: time="2025-02-13T15:28:50.851390060Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:50.851729 containerd[1484]: time="2025-02-13T15:28:50.851701238Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:50.851729 containerd[1484]: time="2025-02-13T15:28:50.851718079Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:50.852854 containerd[1484]: time="2025-02-13T15:28:50.852524326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:4,}" Feb 13 15:28:50.854301 
kubelet[2829]: I0213 15:28:50.853514 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a" Feb 13 15:28:50.855314 containerd[1484]: time="2025-02-13T15:28:50.854896425Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:28:50.855314 containerd[1484]: time="2025-02-13T15:28:50.855079396Z" level=info msg="Ensure that sandbox 173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a in task-service has been cleanup successfully" Feb 13 15:28:50.856369 containerd[1484]: time="2025-02-13T15:28:50.856329949Z" level=info msg="TearDown network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" successfully" Feb 13 15:28:50.856607 containerd[1484]: time="2025-02-13T15:28:50.856585844Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" returns successfully" Feb 13 15:28:50.860008 containerd[1484]: time="2025-02-13T15:28:50.859967322Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:28:50.860457 containerd[1484]: time="2025-02-13T15:28:50.860433669Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:28:50.861035 containerd[1484]: time="2025-02-13T15:28:50.861005902Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:28:50.862195 containerd[1484]: time="2025-02-13T15:28:50.862140009Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:28:50.862599 containerd[1484]: time="2025-02-13T15:28:50.862392823Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:28:50.862599 containerd[1484]: time="2025-02-13T15:28:50.862414705Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:28:50.863763 containerd[1484]: time="2025-02-13T15:28:50.863566572Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:50.864170 containerd[1484]: time="2025-02-13T15:28:50.864129525Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:50.865012 containerd[1484]: time="2025-02-13T15:28:50.864781363Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:50.866104 containerd[1484]: time="2025-02-13T15:28:50.865773381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:4,}" Feb 13 15:28:50.866926 kubelet[2829]: I0213 15:28:50.866857 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579" Feb 13 15:28:50.869030 containerd[1484]: time="2025-02-13T15:28:50.868969088Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:28:50.869643 containerd[1484]: time="2025-02-13T15:28:50.869602165Z" level=info msg="Ensure that sandbox 
04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579 in task-service has been cleanup successfully" Feb 13 15:28:50.873605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492-shm.mount: Deactivated successfully. Feb 13 15:28:50.873760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579-shm.mount: Deactivated successfully. Feb 13 15:28:50.873814 systemd[1]: run-netns-cni\x2d5cca8aff\x2d448e\x2de995\x2d852f\x2dbaaebb72606d.mount: Deactivated successfully. Feb 13 15:28:50.873881 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a-shm.mount: Deactivated successfully. Feb 13 15:28:50.873938 systemd[1]: run-netns-cni\x2d35645838\x2df9ae\x2dde48\x2dc577\x2d6d87eb50ab31.mount: Deactivated successfully. Feb 13 15:28:50.873982 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554-shm.mount: Deactivated successfully. Feb 13 15:28:50.874029 systemd[1]: run-netns-cni\x2dbef4caa1\x2d4c2a\x2d6d0e\x2d3536\x2d666d6b1a25f8.mount: Deactivated successfully. Feb 13 15:28:50.874076 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071-shm.mount: Deactivated successfully. Feb 13 15:28:50.874122 systemd[1]: run-netns-cni\x2dbdc3b8a7\x2d65f1\x2dd975\x2d3bb1\x2d11a42adea066.mount: Deactivated successfully. Feb 13 15:28:50.874179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff-shm.mount: Deactivated successfully. Feb 13 15:28:50.881568 containerd[1484]: time="2025-02-13T15:28:50.880437479Z" level=info msg="TearDown network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" successfully" Feb 13 15:28:50.881568 containerd[1484]: time="2025-02-13T15:28:50.880488802Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" returns successfully" Feb 13 15:28:50.883950 containerd[1484]: time="2025-02-13T15:28:50.883907322Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:28:50.884328 containerd[1484]: time="2025-02-13T15:28:50.884079932Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:28:50.884328 containerd[1484]: time="2025-02-13T15:28:50.884097413Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:28:50.885737 systemd[1]: run-netns-cni\x2d2ab7aae1\x2dd135\x2d99ef\x2d5587\x2d0c9e05162ea6.mount: Deactivated successfully. 
Feb 13 15:28:50.888668 containerd[1484]: time="2025-02-13T15:28:50.887517653Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:28:50.888668 containerd[1484]: time="2025-02-13T15:28:50.888306739Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:28:50.888668 containerd[1484]: time="2025-02-13T15:28:50.888329820Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:28:50.888966 kubelet[2829]: I0213 15:28:50.888773 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492" Feb 13 15:28:50.889410 containerd[1484]: time="2025-02-13T15:28:50.889373881Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:50.889561 containerd[1484]: time="2025-02-13T15:28:50.889485568Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:50.889561 containerd[1484]: time="2025-02-13T15:28:50.889501169Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:50.892832 containerd[1484]: time="2025-02-13T15:28:50.892781201Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:28:50.893151 containerd[1484]: time="2025-02-13T15:28:50.892981612Z" level=info msg="Ensure that sandbox d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492 in task-service has been cleanup successfully" Feb 13 15:28:50.895808 systemd[1]: run-netns-cni\x2d981e524c\x2d6492\x2dea5b\x2d2adb\x2db95ce0f3f06d.mount: Deactivated successfully. 
Feb 13 15:28:50.897451 containerd[1484]: time="2025-02-13T15:28:50.896109875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:28:50.897609 containerd[1484]: time="2025-02-13T15:28:50.897555800Z" level=info msg="TearDown network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" successfully" Feb 13 15:28:50.897685 containerd[1484]: time="2025-02-13T15:28:50.897668887Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" returns successfully" Feb 13 15:28:50.900467 containerd[1484]: time="2025-02-13T15:28:50.900412447Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:28:50.900626 containerd[1484]: time="2025-02-13T15:28:50.900579977Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:28:50.900626 containerd[1484]: time="2025-02-13T15:28:50.900592217Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:28:50.901339 containerd[1484]: time="2025-02-13T15:28:50.901305579Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:28:50.902650 containerd[1484]: time="2025-02-13T15:28:50.902606695Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:28:50.902650 containerd[1484]: time="2025-02-13T15:28:50.902641217Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:28:50.905978 containerd[1484]: time="2025-02-13T15:28:50.905811523Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:50.906400 containerd[1484]: time="2025-02-13T15:28:50.906151943Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:50.906400 containerd[1484]: time="2025-02-13T15:28:50.906396517Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:50.908103 containerd[1484]: time="2025-02-13T15:28:50.907888244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:4,}" Feb 13 15:28:51.144434 containerd[1484]: time="2025-02-13T15:28:51.143467562Z" level=error msg="Failed to destroy network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.145777 containerd[1484]: time="2025-02-13T15:28:51.145155782Z" level=error msg="encountered an error cleaning up failed sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:28:51.145777 containerd[1484]: time="2025-02-13T15:28:51.145245667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.146358 kubelet[2829]: E0213 15:28:51.145626 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.146358 kubelet[2829]: E0213 15:28:51.145682 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:51.146358 kubelet[2829]: E0213 15:28:51.145704 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:51.146485 kubelet[2829]: E0213 15:28:51.145758 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:51.171163 containerd[1484]: time="2025-02-13T15:28:51.170943581Z" level=error msg="Failed to destroy network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.172622 containerd[1484]: time="2025-02-13T15:28:51.172568077Z" level=error msg="encountered an error cleaning up failed sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.173212 containerd[1484]: time="2025-02-13T15:28:51.172950379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.174354 kubelet[2829]: E0213 15:28:51.174319 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.174450 kubelet[2829]: E0213 15:28:51.174384 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:51.174450 kubelet[2829]: E0213 15:28:51.174408 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:51.174450 kubelet[2829]: E0213 15:28:51.174470 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:51.176724 containerd[1484]: time="2025-02-13T15:28:51.176681039Z" level=error msg="Failed to destroy network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.177301 containerd[1484]: time="2025-02-13T15:28:51.177197869Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.177462 containerd[1484]: time="2025-02-13T15:28:51.177426443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.178410 kubelet[2829]: E0213 15:28:51.178382 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.178484 kubelet[2829]: E0213 15:28:51.178439 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:51.178484 kubelet[2829]: E0213 15:28:51.178463 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:51.178547 kubelet[2829]: E0213 15:28:51.178512 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:51.183653 containerd[1484]: time="2025-02-13T15:28:51.183480559Z" level=error msg="Failed to destroy network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.184708 containerd[1484]: time="2025-02-13T15:28:51.184593425Z" 
level=error msg="encountered an error cleaning up failed sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.185867 containerd[1484]: time="2025-02-13T15:28:51.185830978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.187596 kubelet[2829]: E0213 15:28:51.186427 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.187596 kubelet[2829]: E0213 15:28:51.186479 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:51.187596 kubelet[2829]: E0213 15:28:51.186498 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:51.187759 kubelet[2829]: E0213 15:28:51.186557 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:51.198337 containerd[1484]: time="2025-02-13T15:28:51.198254670Z" level=error msg="Failed to destroy network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.200076 
containerd[1484]: time="2025-02-13T15:28:51.198879307Z" level=error msg="encountered an error cleaning up failed sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.200218 containerd[1484]: time="2025-02-13T15:28:51.200115939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.200478 kubelet[2829]: E0213 15:28:51.200454 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.200544 kubelet[2829]: E0213 15:28:51.200516 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:51.200544 kubelet[2829]: E0213 15:28:51.200542 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:51.200616 kubelet[2829]: E0213 15:28:51.200601 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:51.216878 containerd[1484]: time="2025-02-13T15:28:51.216582469Z" level=error msg="Failed to destroy network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.217678 containerd[1484]: time="2025-02-13T15:28:51.217578128Z" level=error msg="encountered an error cleaning up failed sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.217818 containerd[1484]: time="2025-02-13T15:28:51.217684214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.218258 kubelet[2829]: E0213 15:28:51.218117 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:51.218258 kubelet[2829]: E0213 15:28:51.218213 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:51.218258 kubelet[2829]: E0213 15:28:51.218251 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:51.219022 kubelet[2829]: E0213 15:28:51.218712 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:51.873917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498-shm.mount: Deactivated successfully. 
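[Editor's note, not part of the captured journal: every sandbox create/delete above fails with the same CNI error because the plugin cannot find /var/lib/calico/nodename, the file that calico/node writes onto the host once it is running with /var/lib/calico/ mounted. The snippet below is a minimal illustrative sketch of that precondition check, not Calico's actual source; the path and error wording are taken directly from the log records above.]

```go
// Sketch of the host-side precondition behind the repeated
// "stat /var/lib/calico/nodename" failures: until calico/node has
// started and written this file, every CNI ADD/DEL on the node fails.
package main

import (
	"fmt"
	"os"
)

// Path taken verbatim from the log messages above.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// Mirrors the hint emitted by the CNI plugin in the log.
			fmt.Fprintf(os.Stderr,
				"stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n",
				nodenameFile)
			os.Exit(1)
		}
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("calico nodename: %s\n", string(data))
}
```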
Feb 13 15:28:51.874031 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d-shm.mount: Deactivated successfully. Feb 13 15:28:51.895300 kubelet[2829]: I0213 15:28:51.894532 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7" Feb 13 15:28:51.895779 containerd[1484]: time="2025-02-13T15:28:51.895351374Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" Feb 13 15:28:51.895779 containerd[1484]: time="2025-02-13T15:28:51.895536185Z" level=info msg="Ensure that sandbox d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7 in task-service has been cleanup successfully" Feb 13 15:28:51.898910 containerd[1484]: time="2025-02-13T15:28:51.896309230Z" level=info msg="TearDown network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" successfully" Feb 13 15:28:51.898910 containerd[1484]: time="2025-02-13T15:28:51.896338912Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" returns successfully" Feb 13 15:28:51.898910 containerd[1484]: time="2025-02-13T15:28:51.898984988Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:28:51.898910 containerd[1484]: time="2025-02-13T15:28:51.899080193Z" level=info msg="TearDown network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" successfully" Feb 13 15:28:51.898910 containerd[1484]: time="2025-02-13T15:28:51.899091314Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" returns successfully" Feb 13 15:28:51.899788 containerd[1484]: time="2025-02-13T15:28:51.899690309Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:28:51.899788 containerd[1484]: time="2025-02-13T15:28:51.899763674Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:28:51.899788 containerd[1484]: time="2025-02-13T15:28:51.899772274Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:28:51.900441 systemd[1]: run-netns-cni\x2d65730e70\x2dbc78\x2defa7\x2db4ca\x2d1d94173add57.mount: Deactivated successfully. 
Feb 13 15:28:51.901246 containerd[1484]: time="2025-02-13T15:28:51.900833377Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:28:51.901246 containerd[1484]: time="2025-02-13T15:28:51.900914421Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:28:51.901246 containerd[1484]: time="2025-02-13T15:28:51.900923702Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:28:51.902252 containerd[1484]: time="2025-02-13T15:28:51.901483295Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:51.902252 containerd[1484]: time="2025-02-13T15:28:51.901560139Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:51.902252 containerd[1484]: time="2025-02-13T15:28:51.901635104Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:51.904153 containerd[1484]: time="2025-02-13T15:28:51.903641222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:28:51.905137 kubelet[2829]: I0213 15:28:51.905109 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d" Feb 13 15:28:51.906680 containerd[1484]: time="2025-02-13T15:28:51.906639879Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" Feb 13 15:28:51.906895 containerd[1484]: time="2025-02-13T15:28:51.906806889Z" level=info msg="Ensure that sandbox 1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d in task-service has been cleanup successfully" Feb 13 15:28:51.910825 systemd[1]: run-netns-cni\x2d96606e92\x2d7ade\x2dc58b\x2ddc50\x2d14441da61084.mount: Deactivated successfully. 
Feb 13 15:28:51.913137 containerd[1484]: time="2025-02-13T15:28:51.913015294Z" level=info msg="TearDown network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" successfully" Feb 13 15:28:51.913137 containerd[1484]: time="2025-02-13T15:28:51.913050016Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" returns successfully" Feb 13 15:28:51.913686 containerd[1484]: time="2025-02-13T15:28:51.913657692Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:28:51.914173 containerd[1484]: time="2025-02-13T15:28:51.913867624Z" level=info msg="TearDown network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" successfully" Feb 13 15:28:51.914173 containerd[1484]: time="2025-02-13T15:28:51.914154081Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" returns successfully" Feb 13 15:28:51.915499 containerd[1484]: time="2025-02-13T15:28:51.915189942Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:28:51.915499 containerd[1484]: time="2025-02-13T15:28:51.915374393Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:28:51.915499 containerd[1484]: time="2025-02-13T15:28:51.915386954Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.916708512Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.916781556Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.916790197Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.917354550Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.917470157Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:51.917621 containerd[1484]: time="2025-02-13T15:28:51.917545801Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:51.918885 containerd[1484]: time="2025-02-13T15:28:51.918568501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:51.920919 kubelet[2829]: I0213 15:28:51.920898 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498" Feb 13 15:28:51.923772 containerd[1484]: time="2025-02-13T15:28:51.923180213Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" Feb 13 15:28:51.923772 containerd[1484]: time="2025-02-13T15:28:51.923387825Z" level=info msg="Ensure 
that sandbox 0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498 in task-service has been cleanup successfully" Feb 13 15:28:51.925953 containerd[1484]: time="2025-02-13T15:28:51.925925895Z" level=info msg="TearDown network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" successfully" Feb 13 15:28:51.926131 containerd[1484]: time="2025-02-13T15:28:51.926010140Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" returns successfully" Feb 13 15:28:51.926874 systemd[1]: run-netns-cni\x2dd17f5ec9\x2dd16a\x2d0c58\x2d1529\x2d01b9b3fd19d8.mount: Deactivated successfully. Feb 13 15:28:51.928849 containerd[1484]: time="2025-02-13T15:28:51.928740981Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:28:51.929148 containerd[1484]: time="2025-02-13T15:28:51.929050999Z" level=info msg="TearDown network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" successfully" Feb 13 15:28:51.929148 containerd[1484]: time="2025-02-13T15:28:51.929080561Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" returns successfully" Feb 13 15:28:51.930610 containerd[1484]: time="2025-02-13T15:28:51.930386998Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:28:51.931433 containerd[1484]: time="2025-02-13T15:28:51.931411258Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:28:51.931575 containerd[1484]: time="2025-02-13T15:28:51.931471341Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:28:51.932826 containerd[1484]: time="2025-02-13T15:28:51.932318391Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:28:51.932826 containerd[1484]: time="2025-02-13T15:28:51.932696454Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:28:51.932826 containerd[1484]: time="2025-02-13T15:28:51.932733856Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:28:51.932942 kubelet[2829]: I0213 15:28:51.932351 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492" Feb 13 15:28:51.938906 containerd[1484]: time="2025-02-13T15:28:51.938852296Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" Feb 13 15:28:51.939484 containerd[1484]: time="2025-02-13T15:28:51.939436331Z" level=info msg="Ensure that sandbox 7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492 in task-service has been cleanup successfully" Feb 13 15:28:51.943578 containerd[1484]: time="2025-02-13T15:28:51.943474689Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:51.944286 containerd[1484]: time="2025-02-13T15:28:51.944132647Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:28:51.944286 containerd[1484]: time="2025-02-13T15:28:51.944163289Z" 
level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:51.944604 containerd[1484]: time="2025-02-13T15:28:51.944502909Z" level=info msg="TearDown network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" successfully" Feb 13 15:28:51.944604 containerd[1484]: time="2025-02-13T15:28:51.944533151Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" returns successfully" Feb 13 15:28:51.946081 containerd[1484]: time="2025-02-13T15:28:51.946047000Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:28:51.946174 containerd[1484]: time="2025-02-13T15:28:51.946160407Z" level=info msg="TearDown network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" successfully" Feb 13 15:28:51.946174 containerd[1484]: time="2025-02-13T15:28:51.946171127Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" returns successfully" Feb 13 15:28:51.947123 containerd[1484]: time="2025-02-13T15:28:51.947004656Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:28:51.947123 containerd[1484]: time="2025-02-13T15:28:51.947084821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:51.948860 containerd[1484]: time="2025-02-13T15:28:51.947102862Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:28:51.948860 containerd[1484]: time="2025-02-13T15:28:51.948261731Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:28:51.952333 containerd[1484]: time="2025-02-13T15:28:51.951676452Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:28:51.960327 containerd[1484]: time="2025-02-13T15:28:51.959741887Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:28:51.960327 containerd[1484]: time="2025-02-13T15:28:51.959784529Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:28:51.960940 containerd[1484]: time="2025-02-13T15:28:51.960900755Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:51.961570 containerd[1484]: time="2025-02-13T15:28:51.961544633Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:51.962184 containerd[1484]: time="2025-02-13T15:28:51.962129947Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:51.964526 containerd[1484]: time="2025-02-13T15:28:51.964483006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:5,}" Feb 13 15:28:51.967901 kubelet[2829]: I0213 15:28:51.966867 2829 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0" Feb 13 15:28:51.973646 containerd[1484]: time="2025-02-13T15:28:51.973593863Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" Feb 13 15:28:51.973830 containerd[1484]: time="2025-02-13T15:28:51.973808995Z" level=info msg="Ensure that sandbox 94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0 in task-service has been cleanup successfully" Feb 13 15:28:51.980132 containerd[1484]: time="2025-02-13T15:28:51.980097686Z" level=info msg="TearDown network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" successfully" Feb 13 15:28:51.980770 containerd[1484]: time="2025-02-13T15:28:51.980734363Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" returns successfully" Feb 13 15:28:51.982339 containerd[1484]: time="2025-02-13T15:28:51.982067922Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:28:51.983200 containerd[1484]: time="2025-02-13T15:28:51.983165187Z" level=info msg="TearDown network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" successfully" Feb 13 15:28:51.983524 containerd[1484]: time="2025-02-13T15:28:51.983494486Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" returns successfully" Feb 13 15:28:51.985014 containerd[1484]: time="2025-02-13T15:28:51.984986094Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:28:51.985952 containerd[1484]: time="2025-02-13T15:28:51.985924829Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:28:51.986124 containerd[1484]: time="2025-02-13T15:28:51.986105120Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:28:51.986630 kubelet[2829]: I0213 15:28:51.986608 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8" Feb 13 15:28:51.987696 containerd[1484]: time="2025-02-13T15:28:51.987669692Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" Feb 13 15:28:51.988042 containerd[1484]: time="2025-02-13T15:28:51.988021473Z" level=info msg="Ensure that sandbox 1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8 in task-service has been cleanup successfully" Feb 13 15:28:51.988167 containerd[1484]: time="2025-02-13T15:28:51.987875784Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:28:51.989088 containerd[1484]: time="2025-02-13T15:28:51.989067214Z" level=info msg="TearDown network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" successfully" Feb 13 15:28:51.989264 containerd[1484]: time="2025-02-13T15:28:51.989200902Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" returns successfully" Feb 13 15:28:51.989471 containerd[1484]: time="2025-02-13T15:28:51.989395634Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:28:51.989471 
containerd[1484]: time="2025-02-13T15:28:51.989460637Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:28:51.991299 containerd[1484]: time="2025-02-13T15:28:51.991238742Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:51.991536 containerd[1484]: time="2025-02-13T15:28:51.991509918Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:51.991745 containerd[1484]: time="2025-02-13T15:28:51.991603764Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:51.991883 containerd[1484]: time="2025-02-13T15:28:51.991833937Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:28:51.991951 containerd[1484]: time="2025-02-13T15:28:51.991932023Z" level=info msg="TearDown network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" successfully" Feb 13 15:28:51.991951 containerd[1484]: time="2025-02-13T15:28:51.991942504Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" returns successfully" Feb 13 15:28:51.996297 containerd[1484]: time="2025-02-13T15:28:51.994800352Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:28:51.996297 containerd[1484]: time="2025-02-13T15:28:51.994920879Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:28:51.996297 containerd[1484]: time="2025-02-13T15:28:51.994934200Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:28:51.996297 containerd[1484]: time="2025-02-13T15:28:51.995040006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:5,}" Feb 13 15:28:51.997432 containerd[1484]: time="2025-02-13T15:28:51.997394545Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:28:51.997508 containerd[1484]: time="2025-02-13T15:28:51.997501031Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:28:51.997537 containerd[1484]: time="2025-02-13T15:28:51.997511272Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:28:51.998553 containerd[1484]: time="2025-02-13T15:28:51.998343401Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:51.999534 containerd[1484]: time="2025-02-13T15:28:51.999260855Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:51.999534 containerd[1484]: time="2025-02-13T15:28:51.999528310Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:52.000795 containerd[1484]: time="2025-02-13T15:28:52.000756223Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:52.164658 containerd[1484]: time="2025-02-13T15:28:52.164607382Z" level=error msg="Failed to destroy network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.172422 containerd[1484]: time="2025-02-13T15:28:52.172372963Z" level=error msg="encountered an error cleaning up failed sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.173100 containerd[1484]: time="2025-02-13T15:28:52.173061964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.173599 kubelet[2829]: E0213 15:28:52.173363 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.173599 kubelet[2829]: E0213 15:28:52.173420 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:52.173599 kubelet[2829]: E0213 15:28:52.173442 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" Feb 13 15:28:52.173851 kubelet[2829]: E0213 15:28:52.173497 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dfcc87966-z4lb7_calico-system(0f9f8988-12d5-4588-beb7-07ec325b3215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podUID="0f9f8988-12d5-4588-beb7-07ec325b3215" Feb 13 15:28:52.183320 containerd[1484]: time="2025-02-13T15:28:52.183195725Z" level=error msg="Failed to destroy network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.185258 containerd[1484]: time="2025-02-13T15:28:52.185191163Z" level=error msg="encountered an error cleaning up failed sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.186084 containerd[1484]: time="2025-02-13T15:28:52.185296369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.186994 kubelet[2829]: E0213 15:28:52.186644 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.186994 kubelet[2829]: E0213 15:28:52.186695 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:52.186994 kubelet[2829]: E0213 15:28:52.186714 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" Feb 13 15:28:52.187156 kubelet[2829]: E0213 15:28:52.186772 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-flt7v_calico-apiserver(7c98ace8-be26-494a-8fb2-3660c969f424)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podUID="7c98ace8-be26-494a-8fb2-3660c969f424" Feb 13 15:28:52.229516 containerd[1484]: time="2025-02-13T15:28:52.229263097Z" level=error msg="Failed to destroy network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.231844 containerd[1484]: time="2025-02-13T15:28:52.231786847Z" level=error msg="encountered an error cleaning up failed sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.232419 containerd[1484]: time="2025-02-13T15:28:52.232376762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.232641 kubelet[2829]: E0213 15:28:52.232616 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.232790 kubelet[2829]: E0213 15:28:52.232677 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:52.232790 kubelet[2829]: E0213 15:28:52.232699 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" Feb 13 15:28:52.232790 kubelet[2829]: E0213 15:28:52.232753 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-96f496cb-jtsr2_calico-apiserver(5df25a74-af5d-4f05-b3c4-95a12fc65600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podUID="5df25a74-af5d-4f05-b3c4-95a12fc65600" Feb 13 15:28:52.239622 containerd[1484]: time="2025-02-13T15:28:52.239478063Z" level=error msg="Failed to destroy network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.240129 containerd[1484]: time="2025-02-13T15:28:52.239972372Z" level=error msg="encountered an error cleaning up failed sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.240129 containerd[1484]: time="2025-02-13T15:28:52.240036096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.240810 kubelet[2829]: E0213 15:28:52.240496 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.240810 kubelet[2829]: E0213 15:28:52.240668 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:52.240810 kubelet[2829]: E0213 15:28:52.240692 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngp8v" Feb 13 15:28:52.240950 kubelet[2829]: E0213 15:28:52.240762 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngp8v_calico-system(cc578b4f-a600-4134-9ea7-e3c0400423a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngp8v" podUID="cc578b4f-a600-4134-9ea7-e3c0400423a8" Feb 13 15:28:52.244476 containerd[1484]: time="2025-02-13T15:28:52.244355672Z" level=error msg="Failed to destroy network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.245349 containerd[1484]: time="2025-02-13T15:28:52.245107797Z" level=error msg="encountered an error cleaning up failed sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.245349 containerd[1484]: time="2025-02-13T15:28:52.245178961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.246172 kubelet[2829]: E0213 15:28:52.245629 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.246172 kubelet[2829]: E0213 15:28:52.245682 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:52.246172 kubelet[2829]: E0213 15:28:52.245701 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-56fvw" Feb 13 15:28:52.246345 kubelet[2829]: E0213 15:28:52.245762 2829 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-56fvw_kube-system(2492456e-62fc-4caa-b4e8-3b7a3936ed4e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-56fvw" podUID="2492456e-62fc-4caa-b4e8-3b7a3936ed4e" Feb 13 15:28:52.247688 containerd[1484]: time="2025-02-13T15:28:52.247646268Z" level=error msg="Failed to destroy network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.248107 containerd[1484]: time="2025-02-13T15:28:52.248082174Z" level=error msg="encountered an error cleaning up failed sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.248522 containerd[1484]: time="2025-02-13T15:28:52.248478477Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.248925 kubelet[2829]: E0213 15:28:52.248769 2829 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:52.248925 kubelet[2829]: E0213 15:28:52.248817 2829 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:52.248925 kubelet[2829]: E0213 15:28:52.248836 2829 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vfsrv" Feb 13 15:28:52.249094 kubelet[2829]: E0213 
15:28:52.248889 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vfsrv_kube-system(0f89f8e5-fc39-4122-94aa-c93e88296236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vfsrv" podUID="0f89f8e5-fc39-4122-94aa-c93e88296236" Feb 13 15:28:52.282616 containerd[1484]: time="2025-02-13T15:28:52.282546138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:52.284768 containerd[1484]: time="2025-02-13T15:28:52.284564177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 15:28:52.286325 containerd[1484]: time="2025-02-13T15:28:52.285895376Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:52.290132 containerd[1484]: time="2025-02-13T15:28:52.289523112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:52.290578 containerd[1484]: time="2025-02-13T15:28:52.290546812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.632795474s" Feb 13 15:28:52.290685 containerd[1484]: time="2025-02-13T15:28:52.290669780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 15:28:52.302843 containerd[1484]: time="2025-02-13T15:28:52.302793539Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:28:52.327422 containerd[1484]: time="2025-02-13T15:28:52.327350835Z" level=info msg="CreateContainer within sandbox \"f8f7fec30bc61b38f5083bd7fa1388b2e4893e8fad107074d854aa55e4031287\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315\"" Feb 13 15:28:52.329623 containerd[1484]: time="2025-02-13T15:28:52.329084898Z" level=info msg="StartContainer for \"6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315\"" Feb 13 15:28:52.360524 systemd[1]: Started cri-containerd-6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315.scope - libcontainer container 6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315. 
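
Every sandbox failure above bottoms out in the same pre-flight check quoted in the error text: the Calico CNI plugin stats /var/lib/calico/nodename and refuses to add or delete pod networking until that file exists, which is why the message says to check that the calico/node container is running and has mounted /var/lib/calico/. The image pull and StartContainer entries just above are that calico-node container coming up. A minimal Go sketch of the check, using only the path and hint text taken from the log (illustrative, not Calico's actual code):

    // Minimal sketch of the check behind the repeated CNI errors above.
    // Illustrative only; the path and hint text are copied from the log, not from Calico's source.
    package main

    import (
    	"fmt"
    	"os"
    )

    func checkNodename() error {
    	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
    		// os.Stat already renders as "stat /var/lib/calico/nodename: no such file or directory",
    		// so wrapping it reproduces the message seen in the log.
    		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	return nil
    }

    func main() {
    	if err := checkNodename(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("/var/lib/calico/nodename present; sandbox add/delete can proceed")
    }
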
Feb 13 15:28:52.403043 containerd[1484]: time="2025-02-13T15:28:52.402883636Z" level=info msg="StartContainer for \"6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315\" returns successfully" Feb 13 15:28:52.519522 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:28:52.519713 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 15:28:52.884463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b-shm.mount: Deactivated successfully. Feb 13 15:28:52.884618 systemd[1]: run-netns-cni\x2d072056f5\x2d84ec\x2ddc33\x2da181\x2d237f07b04d61.mount: Deactivated successfully. Feb 13 15:28:52.884707 systemd[1]: run-netns-cni\x2d37ee6d32\x2d2248\x2d99e3\x2dbcc6\x2da68bdab1fb19.mount: Deactivated successfully. Feb 13 15:28:52.884792 systemd[1]: run-netns-cni\x2d2f9f9968\x2df747\x2d24df\x2d7fce\x2d12f17004c5aa.mount: Deactivated successfully. Feb 13 15:28:52.884881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720494705.mount: Deactivated successfully. Feb 13 15:28:52.993937 kubelet[2829]: I0213 15:28:52.993894 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda" Feb 13 15:28:52.994964 containerd[1484]: time="2025-02-13T15:28:52.994899832Z" level=info msg="StopPodSandbox for \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\"" Feb 13 15:28:52.995114 containerd[1484]: time="2025-02-13T15:28:52.995083283Z" level=info msg="Ensure that sandbox 2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda in task-service has been cleanup successfully" Feb 13 15:28:52.998422 containerd[1484]: time="2025-02-13T15:28:52.995780764Z" level=info msg="TearDown network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" successfully" Feb 13 15:28:52.998422 containerd[1484]: time="2025-02-13T15:28:52.995850488Z" level=info msg="StopPodSandbox for \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" returns successfully" Feb 13 15:28:52.999979 containerd[1484]: time="2025-02-13T15:28:52.999626032Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" Feb 13 15:28:52.999979 containerd[1484]: time="2025-02-13T15:28:52.999724998Z" level=info msg="TearDown network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" successfully" Feb 13 15:28:52.999979 containerd[1484]: time="2025-02-13T15:28:52.999734119Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" returns successfully" Feb 13 15:28:52.999766 systemd[1]: run-netns-cni\x2d617de98f\x2d448e\x2db22f\x2d9573\x2d8628e97dad59.mount: Deactivated successfully. 
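
The systemd units being cleaned up above (run-netns-cni\x2d…, run-containerd-…-shm.mount, var-lib-containerd-tmpmounts-containerd\x2dmount…) are mount-point paths escaped into unit names: '/' maps to '-' and bytes outside a small safe set, including '-' itself, become \xNN escapes. A minimal sketch of that escaping, assuming only the two rules visible here (not systemd's full algorithm); it reproduces the netns unit name from the last entry above:

    // Simplified sketch of how systemd derives a mount unit name from a mount-point path.
    // Assumes only two rules ('/' -> '-', unsafe bytes -> \xNN); systemd's real escaping has more cases.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func escapePath(path string) string {
    	path = strings.Trim(path, "/")
    	var b strings.Builder
    	for i := 0; i < len(path); i++ {
    		c := path[i]
    		switch {
    		case c == '/':
    			b.WriteByte('-')
    		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
    			b.WriteByte(c)
    		default:
    			fmt.Fprintf(&b, `\x%02x`, c) // '-' (0x2d) ends up as \x2d, as in the unit names above
    		}
    	}
    	return b.String()
    }

    func main() {
    	// Prints run-netns-cni\x2d617de98f\x2d448e\x2db22f\x2d9573\x2d8628e97dad59.mount,
    	// the unit deactivated in the last entry above.
    	fmt.Println(escapePath("/run/netns/cni-617de98f-448e-b22f-9573-8628e97dad59") + ".mount")
    }
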
Feb 13 15:28:53.001566 containerd[1484]: time="2025-02-13T15:28:53.001137082Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:28:53.001566 containerd[1484]: time="2025-02-13T15:28:53.001253969Z" level=info msg="TearDown network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" successfully" Feb 13 15:28:53.001566 containerd[1484]: time="2025-02-13T15:28:53.001340855Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" returns successfully" Feb 13 15:28:53.002451 containerd[1484]: time="2025-02-13T15:28:53.002318473Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:28:53.002623 containerd[1484]: time="2025-02-13T15:28:53.002591329Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:28:53.002781 containerd[1484]: time="2025-02-13T15:28:53.002707936Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:28:53.003596 containerd[1484]: time="2025-02-13T15:28:53.003306132Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:28:53.003596 containerd[1484]: time="2025-02-13T15:28:53.003388697Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:28:53.003596 containerd[1484]: time="2025-02-13T15:28:53.003397337Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:28:53.004106 containerd[1484]: time="2025-02-13T15:28:53.004036296Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:28:53.004187 containerd[1484]: time="2025-02-13T15:28:53.004121181Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:28:53.004187 containerd[1484]: time="2025-02-13T15:28:53.004132181Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:28:53.004850 containerd[1484]: time="2025-02-13T15:28:53.004676814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:28:53.005992 kubelet[2829]: I0213 15:28:53.005930 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801" Feb 13 15:28:53.008581 containerd[1484]: time="2025-02-13T15:28:53.008252347Z" level=info msg="StopPodSandbox for \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\"" Feb 13 15:28:53.010150 containerd[1484]: time="2025-02-13T15:28:53.009641270Z" level=info msg="Ensure that sandbox 43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801 in task-service has been cleanup successfully" Feb 13 15:28:53.013963 containerd[1484]: time="2025-02-13T15:28:53.013796118Z" level=info msg="TearDown network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" successfully" Feb 13 15:28:53.013963 containerd[1484]: time="2025-02-13T15:28:53.013834361Z" 
level=info msg="StopPodSandbox for \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" returns successfully" Feb 13 15:28:53.014535 containerd[1484]: time="2025-02-13T15:28:53.014229424Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" Feb 13 15:28:53.014535 containerd[1484]: time="2025-02-13T15:28:53.014386234Z" level=info msg="TearDown network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" successfully" Feb 13 15:28:53.014535 containerd[1484]: time="2025-02-13T15:28:53.014399914Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" returns successfully" Feb 13 15:28:53.015923 systemd[1]: run-netns-cni\x2de7759c37\x2d34f5\x2d71bc\x2d096f\x2db0f8786bf1a3.mount: Deactivated successfully. Feb 13 15:28:53.017798 containerd[1484]: time="2025-02-13T15:28:53.017120837Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:28:53.017798 containerd[1484]: time="2025-02-13T15:28:53.017234444Z" level=info msg="TearDown network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" successfully" Feb 13 15:28:53.017798 containerd[1484]: time="2025-02-13T15:28:53.017244604Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" returns successfully" Feb 13 15:28:53.018642 containerd[1484]: time="2025-02-13T15:28:53.018609326Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:28:53.018927 containerd[1484]: time="2025-02-13T15:28:53.018869061Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:28:53.018927 containerd[1484]: time="2025-02-13T15:28:53.018884102Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:28:53.020211 containerd[1484]: time="2025-02-13T15:28:53.019363011Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:28:53.020211 containerd[1484]: time="2025-02-13T15:28:53.019523340Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:28:53.020211 containerd[1484]: time="2025-02-13T15:28:53.019536541Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:28:53.020359 kubelet[2829]: I0213 15:28:53.019342 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b" Feb 13 15:28:53.021731 containerd[1484]: time="2025-02-13T15:28:53.020841219Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:28:53.021731 containerd[1484]: time="2025-02-13T15:28:53.020946105Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:28:53.021731 containerd[1484]: time="2025-02-13T15:28:53.020957306Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:28:53.022658 containerd[1484]: time="2025-02-13T15:28:53.022618885Z" level=info 
msg="StopPodSandbox for \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\"" Feb 13 15:28:53.022816 containerd[1484]: time="2025-02-13T15:28:53.022794896Z" level=info msg="Ensure that sandbox a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b in task-service has been cleanup successfully" Feb 13 15:28:53.027096 containerd[1484]: time="2025-02-13T15:28:53.025412652Z" level=info msg="TearDown network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" successfully" Feb 13 15:28:53.027096 containerd[1484]: time="2025-02-13T15:28:53.027094392Z" level=info msg="StopPodSandbox for \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" returns successfully" Feb 13 15:28:53.030519 containerd[1484]: time="2025-02-13T15:28:53.025810236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:6,}" Feb 13 15:28:53.027904 systemd[1]: run-netns-cni\x2d8263eaae\x2d42fd\x2d9c8d\x2de5c1\x2d7abe1e924175.mount: Deactivated successfully. Feb 13 15:28:53.033118 containerd[1484]: time="2025-02-13T15:28:53.032633203Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" Feb 13 15:28:53.034503 containerd[1484]: time="2025-02-13T15:28:53.034461392Z" level=info msg="TearDown network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" successfully" Feb 13 15:28:53.034680 containerd[1484]: time="2025-02-13T15:28:53.034663924Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" returns successfully" Feb 13 15:28:53.036847 containerd[1484]: time="2025-02-13T15:28:53.036784131Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:28:53.036952 containerd[1484]: time="2025-02-13T15:28:53.036905018Z" level=info msg="TearDown network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" successfully" Feb 13 15:28:53.036952 containerd[1484]: time="2025-02-13T15:28:53.036917299Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" returns successfully" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.037911398Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.038086609Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.038100850Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.038506354Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.038590279Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:28:53.039106 containerd[1484]: time="2025-02-13T15:28:53.038599239Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:28:53.039388 kubelet[2829]: I0213 15:28:53.038611 2829 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b" Feb 13 15:28:53.039663 containerd[1484]: time="2025-02-13T15:28:53.039626341Z" level=info msg="StopPodSandbox for \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\"" Feb 13 15:28:53.039803 containerd[1484]: time="2025-02-13T15:28:53.039783110Z" level=info msg="Ensure that sandbox bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b in task-service has been cleanup successfully" Feb 13 15:28:53.042192 containerd[1484]: time="2025-02-13T15:28:53.042159892Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:28:53.042446 containerd[1484]: time="2025-02-13T15:28:53.042426508Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:28:53.042507 containerd[1484]: time="2025-02-13T15:28:53.042493232Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:28:53.043223 containerd[1484]: time="2025-02-13T15:28:53.043196274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:6,}" Feb 13 15:28:53.045293 containerd[1484]: time="2025-02-13T15:28:53.045235676Z" level=info msg="TearDown network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" successfully" Feb 13 15:28:53.046080 containerd[1484]: time="2025-02-13T15:28:53.046030123Z" level=info msg="StopPodSandbox for \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" returns successfully" Feb 13 15:28:53.047775 containerd[1484]: time="2025-02-13T15:28:53.047737665Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" Feb 13 15:28:53.049093 containerd[1484]: time="2025-02-13T15:28:53.048913575Z" level=info msg="TearDown network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" successfully" Feb 13 15:28:53.049093 containerd[1484]: time="2025-02-13T15:28:53.049000420Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" returns successfully" Feb 13 15:28:53.050799 containerd[1484]: time="2025-02-13T15:28:53.050358542Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:28:53.050799 containerd[1484]: time="2025-02-13T15:28:53.050460868Z" level=info msg="TearDown network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" successfully" Feb 13 15:28:53.050799 containerd[1484]: time="2025-02-13T15:28:53.050470228Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" returns successfully" Feb 13 15:28:53.055429 containerd[1484]: time="2025-02-13T15:28:53.055256834Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:28:53.055917 containerd[1484]: time="2025-02-13T15:28:53.055709341Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:28:53.055917 containerd[1484]: time="2025-02-13T15:28:53.055730902Z" level=info msg="StopPodSandbox for 
\"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:28:53.056644 containerd[1484]: time="2025-02-13T15:28:53.056618595Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:28:53.057327 containerd[1484]: time="2025-02-13T15:28:53.057086503Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:28:53.057327 containerd[1484]: time="2025-02-13T15:28:53.057126266Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:28:53.059524 containerd[1484]: time="2025-02-13T15:28:53.059484566Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:28:53.060064 containerd[1484]: time="2025-02-13T15:28:53.059593573Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:28:53.060064 containerd[1484]: time="2025-02-13T15:28:53.059658017Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:28:53.061037 containerd[1484]: time="2025-02-13T15:28:53.060996257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:6,}" Feb 13 15:28:53.068994 kubelet[2829]: I0213 15:28:53.068960 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6" Feb 13 15:28:53.071850 containerd[1484]: time="2025-02-13T15:28:53.071645533Z" level=info msg="StopPodSandbox for \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\"" Feb 13 15:28:53.071850 containerd[1484]: time="2025-02-13T15:28:53.071847065Z" level=info msg="Ensure that sandbox d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6 in task-service has been cleanup successfully" Feb 13 15:28:53.080710 containerd[1484]: time="2025-02-13T15:28:53.080520743Z" level=info msg="TearDown network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" successfully" Feb 13 15:28:53.080710 containerd[1484]: time="2025-02-13T15:28:53.080553425Z" level=info msg="StopPodSandbox for \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" returns successfully" Feb 13 15:28:53.082674 containerd[1484]: time="2025-02-13T15:28:53.082448258Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" Feb 13 15:28:53.082906 containerd[1484]: time="2025-02-13T15:28:53.082815000Z" level=info msg="TearDown network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" successfully" Feb 13 15:28:53.082906 containerd[1484]: time="2025-02-13T15:28:53.082837961Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" returns successfully" Feb 13 15:28:53.083949 containerd[1484]: time="2025-02-13T15:28:53.083830940Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:28:53.084475 containerd[1484]: time="2025-02-13T15:28:53.084424576Z" level=info msg="TearDown network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" 
successfully" Feb 13 15:28:53.084475 containerd[1484]: time="2025-02-13T15:28:53.084448537Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" returns successfully" Feb 13 15:28:53.085892 containerd[1484]: time="2025-02-13T15:28:53.085685331Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:28:53.086256 containerd[1484]: time="2025-02-13T15:28:53.086081795Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:28:53.086256 containerd[1484]: time="2025-02-13T15:28:53.086198242Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:28:53.088532 containerd[1484]: time="2025-02-13T15:28:53.088392813Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:28:53.088790 containerd[1484]: time="2025-02-13T15:28:53.088747114Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:28:53.088790 containerd[1484]: time="2025-02-13T15:28:53.088766635Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:28:53.091251 kubelet[2829]: I0213 15:28:53.090513 2829 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7" Feb 13 15:28:53.093229 containerd[1484]: time="2025-02-13T15:28:53.093119775Z" level=info msg="StopPodSandbox for \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\"" Feb 13 15:28:53.093229 containerd[1484]: time="2025-02-13T15:28:53.093169218Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:28:53.093594 containerd[1484]: time="2025-02-13T15:28:53.093377150Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:28:53.093594 containerd[1484]: time="2025-02-13T15:28:53.093402152Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:28:53.098136 kubelet[2829]: I0213 15:28:53.098053 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-rg4mg" podStartSLOduration=1.710849759 podStartE2EDuration="16.098002467s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:37.903862092 +0000 UTC m=+23.557485523" lastFinishedPulling="2025-02-13 15:28:52.2910148 +0000 UTC m=+37.944638231" observedRunningTime="2025-02-13 15:28:53.092147557 +0000 UTC m=+38.745771028" watchObservedRunningTime="2025-02-13 15:28:53.098002467 +0000 UTC m=+38.751625898" Feb 13 15:28:53.100037 containerd[1484]: time="2025-02-13T15:28:53.094516458Z" level=info msg="Ensure that sandbox 0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7 in task-service has been cleanup successfully" Feb 13 15:28:53.100365 containerd[1484]: time="2025-02-13T15:28:53.100315565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:6,}" Feb 13 15:28:53.100501 containerd[1484]: 
time="2025-02-13T15:28:53.100441092Z" level=info msg="TearDown network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" successfully" Feb 13 15:28:53.100818 containerd[1484]: time="2025-02-13T15:28:53.100765952Z" level=info msg="StopPodSandbox for \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" returns successfully" Feb 13 15:28:53.103731 containerd[1484]: time="2025-02-13T15:28:53.103686326Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" Feb 13 15:28:53.106399 containerd[1484]: time="2025-02-13T15:28:53.106358685Z" level=info msg="TearDown network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" successfully" Feb 13 15:28:53.106632 containerd[1484]: time="2025-02-13T15:28:53.106613901Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" returns successfully" Feb 13 15:28:53.107633 containerd[1484]: time="2025-02-13T15:28:53.107605280Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:28:53.108063 containerd[1484]: time="2025-02-13T15:28:53.107937780Z" level=info msg="TearDown network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" successfully" Feb 13 15:28:53.108063 containerd[1484]: time="2025-02-13T15:28:53.107955941Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" returns successfully" Feb 13 15:28:53.109602 containerd[1484]: time="2025-02-13T15:28:53.109257579Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:28:53.112070 containerd[1484]: time="2025-02-13T15:28:53.110850634Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:28:53.112070 containerd[1484]: time="2025-02-13T15:28:53.111966100Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:28:53.113941 containerd[1484]: time="2025-02-13T15:28:53.113784809Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:28:53.113941 containerd[1484]: time="2025-02-13T15:28:53.113900096Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:28:53.113941 containerd[1484]: time="2025-02-13T15:28:53.113910736Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:28:53.116458 containerd[1484]: time="2025-02-13T15:28:53.116220194Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:28:53.116676 containerd[1484]: time="2025-02-13T15:28:53.116650260Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:28:53.116761 containerd[1484]: time="2025-02-13T15:28:53.116744786Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:28:53.118534 containerd[1484]: time="2025-02-13T15:28:53.118179271Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:6,}" Feb 13 15:28:53.523849 systemd-networkd[1386]: cali582fb917e16: Link UP Feb 13 15:28:53.524061 systemd-networkd[1386]: cali582fb917e16: Gained carrier Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.181 [INFO][4722] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.230 [INFO][4722] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0 calico-apiserver-96f496cb- calico-apiserver 7c98ace8-be26-494a-8fb2-3660c969f424 684 0 2025-02-13 15:28:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96f496cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 calico-apiserver-96f496cb-flt7v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali582fb917e16 [] []}} ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.235 [INFO][4722] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.409 [INFO][4781] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" HandleID="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.447 [INFO][4781] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" HandleID="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000313200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"calico-apiserver-96f496cb-flt7v", "timestamp":"2025-02-13 15:28:53.408983036 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.447 [INFO][4781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.447 [INFO][4781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.447 [INFO][4781] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.453 [INFO][4781] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.465 [INFO][4781] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.474 [INFO][4781] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.477 [INFO][4781] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.482 [INFO][4781] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.482 [INFO][4781] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.486 [INFO][4781] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313 Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.496 [INFO][4781] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.505 [INFO][4781] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.65/26] block=192.168.75.64/26 handle="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.505 [INFO][4781] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.65/26] handle="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.505 [INFO][4781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:53.548569 containerd[1484]: 2025-02-13 15:28:53.505 [INFO][4781] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.65/26] IPv6=[] ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" HandleID="k8s-pod-network.b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.510 [INFO][4722] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0", GenerateName:"calico-apiserver-96f496cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c98ace8-be26-494a-8fb2-3660c969f424", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96f496cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"calico-apiserver-96f496cb-flt7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali582fb917e16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.511 [INFO][4722] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.65/32] ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.511 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali582fb917e16 ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.525 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.526 [INFO][4722] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0", GenerateName:"calico-apiserver-96f496cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c98ace8-be26-494a-8fb2-3660c969f424", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96f496cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313", Pod:"calico-apiserver-96f496cb-flt7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali582fb917e16", MAC:"1e:01:61:4e:8c:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.549384 containerd[1484]: 2025-02-13 15:28:53.542 [INFO][4722] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-flt7v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--flt7v-eth0" Feb 13 15:28:53.581208 systemd-networkd[1386]: cali68deefb8d87: Link UP Feb 13 15:28:53.582366 systemd-networkd[1386]: cali68deefb8d87: Gained carrier Feb 13 15:28:53.590262 containerd[1484]: time="2025-02-13T15:28:53.590135453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:53.591590 containerd[1484]: time="2025-02-13T15:28:53.591225958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:53.592002 containerd[1484]: time="2025-02-13T15:28:53.591768431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.595641 containerd[1484]: time="2025-02-13T15:28:53.595542576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.137 [INFO][4713] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.178 [INFO][4713] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0 coredns-76f75df574- kube-system 0f89f8e5-fc39-4122-94aa-c93e88296236 683 0 2025-02-13 15:28:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 coredns-76f75df574-vfsrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68deefb8d87 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.178 [INFO][4713] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.427 [INFO][4773] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" HandleID="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.463 [INFO][4773] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" HandleID="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"coredns-76f75df574-vfsrv", "timestamp":"2025-02-13 15:28:53.427838642 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.463 [INFO][4773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.505 [INFO][4773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.506 [INFO][4773] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.512 [INFO][4773] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.523 [INFO][4773] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.532 [INFO][4773] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.536 [INFO][4773] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.541 [INFO][4773] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.541 [INFO][4773] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.545 [INFO][4773] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491 Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.558 [INFO][4773] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.571 [INFO][4773] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.66/26] block=192.168.75.64/26 handle="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.572 [INFO][4773] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.66/26] handle="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.572 [INFO][4773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:53.616014 containerd[1484]: 2025-02-13 15:28:53.572 [INFO][4773] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.66/26] IPv6=[] ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" HandleID="k8s-pod-network.eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.576 [INFO][4713] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f89f8e5-fc39-4122-94aa-c93e88296236", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"coredns-76f75df574-vfsrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68deefb8d87", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.576 [INFO][4713] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.66/32] ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.576 [INFO][4713] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68deefb8d87 ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.582 [INFO][4713] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" 
WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.583 [INFO][4713] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0f89f8e5-fc39-4122-94aa-c93e88296236", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491", Pod:"coredns-76f75df574-vfsrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68deefb8d87", MAC:"ea:59:4e:f0:29:43", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.616744 containerd[1484]: 2025-02-13 15:28:53.612 [INFO][4713] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491" Namespace="kube-system" Pod="coredns-76f75df574-vfsrv" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--vfsrv-eth0" Feb 13 15:28:53.632519 systemd[1]: Started cri-containerd-b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313.scope - libcontainer container b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313. Feb 13 15:28:53.661171 systemd-networkd[1386]: cali4d0cbaab247: Link UP Feb 13 15:28:53.661901 systemd-networkd[1386]: cali4d0cbaab247: Gained carrier Feb 13 15:28:53.666569 containerd[1484]: time="2025-02-13T15:28:53.665316343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:53.666569 containerd[1484]: time="2025-02-13T15:28:53.666465571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:53.666569 containerd[1484]: time="2025-02-13T15:28:53.666488372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.667026 containerd[1484]: time="2025-02-13T15:28:53.666992483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.713521 systemd[1]: Started cri-containerd-eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491.scope - libcontainer container eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491. Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.119 [INFO][4704] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.172 [INFO][4704] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0 calico-apiserver-96f496cb- calico-apiserver 5df25a74-af5d-4f05-b3c4-95a12fc65600 681 0 2025-02-13 15:28:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96f496cb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 calico-apiserver-96f496cb-jtsr2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4d0cbaab247 [] []}} ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.172 [INFO][4704] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.414 [INFO][4768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" HandleID="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.466 [INFO][4768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" HandleID="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"calico-apiserver-96f496cb-jtsr2", "timestamp":"2025-02-13 15:28:53.413582471 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 
15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.466 [INFO][4768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.572 [INFO][4768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.572 [INFO][4768] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.579 [INFO][4768] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.592 [INFO][4768] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.605 [INFO][4768] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.611 [INFO][4768] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.617 [INFO][4768] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.618 [INFO][4768] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.622 [INFO][4768] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.634 [INFO][4768] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.644 [INFO][4768] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.67/26] block=192.168.75.64/26 handle="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.645 [INFO][4768] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.67/26] handle="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.646 [INFO][4768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:53.726529 containerd[1484]: 2025-02-13 15:28:53.646 [INFO][4768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.67/26] IPv6=[] ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" HandleID="k8s-pod-network.bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.653 [INFO][4704] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0", GenerateName:"calico-apiserver-96f496cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5df25a74-af5d-4f05-b3c4-95a12fc65600", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96f496cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"calico-apiserver-96f496cb-jtsr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d0cbaab247", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.653 [INFO][4704] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.67/32] ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.654 [INFO][4704] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d0cbaab247 ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.661 [INFO][4704] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.682 [INFO][4704] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0", GenerateName:"calico-apiserver-96f496cb-", Namespace:"calico-apiserver", SelfLink:"", UID:"5df25a74-af5d-4f05-b3c4-95a12fc65600", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96f496cb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b", Pod:"calico-apiserver-96f496cb-jtsr2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4d0cbaab247", MAC:"f6:9e:59:55:24:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.727588 containerd[1484]: 2025-02-13 15:28:53.705 [INFO][4704] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b" Namespace="calico-apiserver" Pod="calico-apiserver-96f496cb-jtsr2" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--apiserver--96f496cb--jtsr2-eth0" Feb 13 15:28:53.782934 systemd-networkd[1386]: cali5d1d8cf7a42: Link UP Feb 13 15:28:53.785768 systemd-networkd[1386]: cali5d1d8cf7a42: Gained carrier Feb 13 15:28:53.827997 containerd[1484]: time="2025-02-13T15:28:53.827625714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-flt7v,Uid:7c98ace8-be26-494a-8fb2-3660c969f424,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313\"" Feb 13 15:28:53.863620 systemd-networkd[1386]: califb15f5d76fc: Link UP Feb 13 15:28:53.865411 systemd-networkd[1386]: califb15f5d76fc: Gained carrier Feb 13 15:28:53.886952 systemd[1]: run-netns-cni\x2db525d2bb\x2d6cc2\x2df191\x2da80f\x2de972c9b41fef.mount: Deactivated successfully. Feb 13 15:28:53.887059 systemd[1]: run-netns-cni\x2d52bcd2e3\x2df91d\x2dac79\x2d5c4b\x2dd17ab51c69b1.mount: Deactivated successfully. Feb 13 15:28:53.887114 systemd[1]: run-netns-cni\x2d8a377fab\x2d1fc6\x2dfd34\x2d8f85\x2d9af3f092f454.mount: Deactivated successfully. 
Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.271 [INFO][4747] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.326 [INFO][4747] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0 coredns-76f75df574- kube-system 2492456e-62fc-4caa-b4e8-3b7a3936ed4e 677 0 2025-02-13 15:28:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 coredns-76f75df574-56fvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5d1d8cf7a42 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.329 [INFO][4747] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.442 [INFO][4791] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" HandleID="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.469 [INFO][4791] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" HandleID="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003990d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"coredns-76f75df574-56fvw", "timestamp":"2025-02-13 15:28:53.442257383 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.469 [INFO][4791] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.645 [INFO][4791] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.645 [INFO][4791] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.651 [INFO][4791] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.675 [INFO][4791] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.688 [INFO][4791] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.693 [INFO][4791] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.699 [INFO][4791] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.700 [INFO][4791] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.712 [INFO][4791] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1 Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.737 [INFO][4791] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.760 [INFO][4791] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.68/26] block=192.168.75.64/26 handle="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.760 [INFO][4791] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.68/26] handle="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.760 [INFO][4791] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:53.911731 containerd[1484]: 2025-02-13 15:28:53.760 [INFO][4791] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.68/26] IPv6=[] ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" HandleID="k8s-pod-network.c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Workload="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.763 [INFO][4747] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2492456e-62fc-4caa-b4e8-3b7a3936ed4e", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"coredns-76f75df574-56fvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d1d8cf7a42", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.764 [INFO][4747] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.68/32] ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.764 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d1d8cf7a42 ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.796 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" 
WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.797 [INFO][4747] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2492456e-62fc-4caa-b4e8-3b7a3936ed4e", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1", Pod:"coredns-76f75df574-56fvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5d1d8cf7a42", MAC:"7a:eb:16:d5:51:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.912287 containerd[1484]: 2025-02-13 15:28:53.811 [INFO][4747] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1" Namespace="kube-system" Pod="coredns-76f75df574-56fvw" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-coredns--76f75df574--56fvw-eth0" Feb 13 15:28:53.933122 containerd[1484]: time="2025-02-13T15:28:53.933084252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:53.937719 containerd[1484]: time="2025-02-13T15:28:53.936655825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:53.937719 containerd[1484]: time="2025-02-13T15:28:53.936752591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:53.937719 containerd[1484]: time="2025-02-13T15:28:53.936771232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.937719 containerd[1484]: time="2025-02-13T15:28:53.936904960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.259 [INFO][4734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.305 [INFO][4734] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0 calico-kube-controllers-5dfcc87966- calico-system 0f9f8988-12d5-4588-beb7-07ec325b3215 682 0 2025-02-13 15:28:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dfcc87966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 calico-kube-controllers-5dfcc87966-z4lb7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califb15f5d76fc [] []}} ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.306 [INFO][4734] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.453 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" HandleID="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.481 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" HandleID="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dc00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"calico-kube-controllers-5dfcc87966-z4lb7", "timestamp":"2025-02-13 15:28:53.453853315 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.481 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.762 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.763 [INFO][4787] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.771 [INFO][4787] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.795 [INFO][4787] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.811 [INFO][4787] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.819 [INFO][4787] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.823 [INFO][4787] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.823 [INFO][4787] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.832 [INFO][4787] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3 Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.842 [INFO][4787] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.853 [INFO][4787] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.69/26] block=192.168.75.64/26 handle="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.853 [INFO][4787] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.69/26] handle="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.853 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:53.942013 containerd[1484]: 2025-02-13 15:28:53.853 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.69/26] IPv6=[] ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" HandleID="k8s-pod-network.4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Workload="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.858 [INFO][4734] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0", GenerateName:"calico-kube-controllers-5dfcc87966-", Namespace:"calico-system", SelfLink:"", UID:"0f9f8988-12d5-4588-beb7-07ec325b3215", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfcc87966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"calico-kube-controllers-5dfcc87966-z4lb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb15f5d76fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.858 [INFO][4734] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.69/32] ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.858 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb15f5d76fc ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.867 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 
15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.868 [INFO][4734] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0", GenerateName:"calico-kube-controllers-5dfcc87966-", Namespace:"calico-system", SelfLink:"", UID:"0f9f8988-12d5-4588-beb7-07ec325b3215", ResourceVersion:"682", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dfcc87966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3", Pod:"calico-kube-controllers-5dfcc87966-z4lb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califb15f5d76fc", MAC:"46:02:75:96:b9:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:53.942691 containerd[1484]: 2025-02-13 15:28:53.922 [INFO][4734] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3" Namespace="calico-system" Pod="calico-kube-controllers-5dfcc87966-z4lb7" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-calico--kube--controllers--5dfcc87966--z4lb7-eth0" Feb 13 15:28:53.978278 systemd-networkd[1386]: calibef743d1694: Link UP Feb 13 15:28:53.981875 systemd-networkd[1386]: calibef743d1694: Gained carrier Feb 13 15:28:53.983379 systemd[1]: Started cri-containerd-bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b.scope - libcontainer container bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b. 
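The "Link UP" / "Gained carrier" lines are systemd-networkd noticing the host side of the veth pairs the Calico CNI plugin has just created (califb15f5d76fc for the kube-controllers pod here; calibef743d1694 belongs to the csi-node-driver endpoint set up in the entries that follow). If one wanted to inspect such an interface from Go, a sketch along the following lines would do; the github.com/vishvananda/netlink dependency and the lookup itself are purely illustrative and are not part of anything running on this host:

package main

import (
	"fmt"
	"log"

	"github.com/vishvananda/netlink" // illustrative dependency, not used by the components logged here
)

func main() {
	// Host-side veth that Calico wired up for calico-kube-controllers-5dfcc87966-z4lb7.
	link, err := netlink.LinkByName("califb15f5d76fc")
	if err != nil {
		log.Fatal(err)
	}
	a := link.Attrs()
	// Prints the interface name, its MAC address, and the operational state
	// ("up" once systemd-networkd has seen the carrier, as in the entries above).
	fmt.Println(a.Name, a.HardwareAddr, a.OperState)
}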
Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.277 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.349 [INFO][4754] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0 csi-node-driver- calico-system cc578b4f-a600-4134-9ea7-e3c0400423a8 595 0 2025-02-13 15:28:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152-2-1-1-73ff0440f7 csi-node-driver-ngp8v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibef743d1694 [] []}} ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.350 [INFO][4754] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.484 [INFO][4797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" HandleID="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Workload="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.501 [INFO][4797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" HandleID="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Workload="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002907f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-1-1-73ff0440f7", "pod":"csi-node-driver-ngp8v", "timestamp":"2025-02-13 15:28:53.484342016 +0000 UTC"}, Hostname:"ci-4152-2-1-1-73ff0440f7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.501 [INFO][4797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.853 [INFO][4797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.854 [INFO][4797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-1-1-73ff0440f7' Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.857 [INFO][4797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.875 [INFO][4797] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.899 [INFO][4797] ipam/ipam.go 489: Trying affinity for 192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.904 [INFO][4797] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.912 [INFO][4797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.64/26 host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.913 [INFO][4797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.64/26 handle="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.921 [INFO][4797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9 Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.935 [INFO][4797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.64/26 handle="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.958 [INFO][4797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.70/26] block=192.168.75.64/26 handle="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.958 [INFO][4797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.70/26] handle="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" host="ci-4152-2-1-1-73ff0440f7" Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.958 [INFO][4797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:54.005472 containerd[1484]: 2025-02-13 15:28:53.958 [INFO][4797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.70/26] IPv6=[] ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" HandleID="k8s-pod-network.00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Workload="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:53.969 [INFO][4754] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc578b4f-a600-4134-9ea7-e3c0400423a8", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"", Pod:"csi-node-driver-ngp8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibef743d1694", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:53.971 [INFO][4754] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.70/32] ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:53.971 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibef743d1694 ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:53.981 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:53.984 [INFO][4754] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc578b4f-a600-4134-9ea7-e3c0400423a8", ResourceVersion:"595", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-1-1-73ff0440f7", ContainerID:"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9", Pod:"csi-node-driver-ngp8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibef743d1694", MAC:"9a:70:79:d7:62:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:54.006027 containerd[1484]: 2025-02-13 15:28:54.002 [INFO][4754] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9" Namespace="calico-system" Pod="csi-node-driver-ngp8v" WorkloadEndpoint="ci--4152--2--1--1--73ff0440f7-k8s-csi--node--driver--ngp8v-eth0" Feb 13 15:28:54.020127 containerd[1484]: time="2025-02-13T15:28:54.018536161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vfsrv,Uid:0f89f8e5-fc39-4122-94aa-c93e88296236,Namespace:kube-system,Attempt:6,} returns sandbox id \"eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491\"" Feb 13 15:28:54.026584 containerd[1484]: time="2025-02-13T15:28:54.026421795Z" level=info msg="CreateContainer within sandbox \"eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:54.056208 containerd[1484]: time="2025-02-13T15:28:54.054147461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:54.056208 containerd[1484]: time="2025-02-13T15:28:54.054241467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:54.060755 containerd[1484]: time="2025-02-13T15:28:54.060460200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.060922 containerd[1484]: time="2025-02-13T15:28:54.060701535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.067879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2896830770.mount: Deactivated successfully. Feb 13 15:28:54.072371 containerd[1484]: time="2025-02-13T15:28:54.071892848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:54.072371 containerd[1484]: time="2025-02-13T15:28:54.071962852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:54.072371 containerd[1484]: time="2025-02-13T15:28:54.071978013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.072371 containerd[1484]: time="2025-02-13T15:28:54.072081619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.081261 containerd[1484]: time="2025-02-13T15:28:54.080923670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:54.083898 containerd[1484]: time="2025-02-13T15:28:54.081025436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:54.083898 containerd[1484]: time="2025-02-13T15:28:54.083817684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.084563 containerd[1484]: time="2025-02-13T15:28:54.084485324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:54.105576 containerd[1484]: time="2025-02-13T15:28:54.104496327Z" level=info msg="CreateContainer within sandbox \"eda78e6a475e34adbb41f3408b08c807da140e474c3b42d6dcba5f182200c491\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4e847a4f4d7d683f20256b56e3022ce570ad152597cd7763c0ca319e7271cab6\"" Feb 13 15:28:54.106858 containerd[1484]: time="2025-02-13T15:28:54.106822827Z" level=info msg="StartContainer for \"4e847a4f4d7d683f20256b56e3022ce570ad152597cd7763c0ca319e7271cab6\"" Feb 13 15:28:54.195539 systemd[1]: Started cri-containerd-4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3.scope - libcontainer container 4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3. Feb 13 15:28:54.218765 systemd[1]: Started cri-containerd-00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9.scope - libcontainer container 00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9. Feb 13 15:28:54.224708 systemd[1]: Started cri-containerd-4e847a4f4d7d683f20256b56e3022ce570ad152597cd7763c0ca319e7271cab6.scope - libcontainer container 4e847a4f4d7d683f20256b56e3022ce570ad152597cd7763c0ca319e7271cab6. Feb 13 15:28:54.230372 systemd[1]: Started cri-containerd-c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1.scope - libcontainer container c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1. 
Feb 13 15:28:54.242742 containerd[1484]: time="2025-02-13T15:28:54.240970249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96f496cb-jtsr2,Uid:5df25a74-af5d-4f05-b3c4-95a12fc65600,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b\"" Feb 13 15:28:54.312612 containerd[1484]: time="2025-02-13T15:28:54.311971796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-56fvw,Uid:2492456e-62fc-4caa-b4e8-3b7a3936ed4e,Namespace:kube-system,Attempt:6,} returns sandbox id \"c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1\"" Feb 13 15:28:54.319492 containerd[1484]: time="2025-02-13T15:28:54.319186309Z" level=info msg="CreateContainer within sandbox \"c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:28:54.354880 containerd[1484]: time="2025-02-13T15:28:54.354525993Z" level=info msg="CreateContainer within sandbox \"c992c1e06ce191bf10c1e661b575218aa3237c0ca7d3e44522977c62d46301d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc041b6ef0d5b58d0025d7669c30e821b3e62bda5eb5e0e5ef997f95d7fc3b2e\"" Feb 13 15:28:54.356002 containerd[1484]: time="2025-02-13T15:28:54.355967480Z" level=info msg="StartContainer for \"fc041b6ef0d5b58d0025d7669c30e821b3e62bda5eb5e0e5ef997f95d7fc3b2e\"" Feb 13 15:28:54.365113 containerd[1484]: time="2025-02-13T15:28:54.364739807Z" level=info msg="StartContainer for \"4e847a4f4d7d683f20256b56e3022ce570ad152597cd7763c0ca319e7271cab6\" returns successfully" Feb 13 15:28:54.444636 systemd[1]: Started cri-containerd-fc041b6ef0d5b58d0025d7669c30e821b3e62bda5eb5e0e5ef997f95d7fc3b2e.scope - libcontainer container fc041b6ef0d5b58d0025d7669c30e821b3e62bda5eb5e0e5ef997f95d7fc3b2e. 
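The RunPodSandbox / CreateContainer / StartContainer sequence in these entries is the kubelet driving containerd over the CRI, and the repeated "loading plugin" lines are each new runc shim (io.containerd.runc.v2) initialising its ttrpc services. The same create-then-start flow can be exercised against containerd directly with its Go client; the sketch below follows containerd's standard client usage, with a throwaway namespace, image reference and IDs rather than anything taken from this log:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers live in the "k8s.io" namespace; this demo keeps to its own.
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create (the analogue of the CreateContainer entries above)...
	container, err := client.NewContainer(ctx, "demo-container",
		containerd.WithNewSnapshot("demo-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// ...then start a task for it (the analogue of StartContainer).
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("started task with pid", task.Pid())
}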
Feb 13 15:28:54.451070 containerd[1484]: time="2025-02-13T15:28:54.450940187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngp8v,Uid:cc578b4f-a600-4134-9ea7-e3c0400423a8,Namespace:calico-system,Attempt:6,} returns sandbox id \"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9\"" Feb 13 15:28:54.555820 containerd[1484]: time="2025-02-13T15:28:54.555769687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dfcc87966-z4lb7,Uid:0f9f8988-12d5-4588-beb7-07ec325b3215,Namespace:calico-system,Attempt:6,} returns sandbox id \"4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3\"" Feb 13 15:28:54.578106 containerd[1484]: time="2025-02-13T15:28:54.577081168Z" level=info msg="StartContainer for \"fc041b6ef0d5b58d0025d7669c30e821b3e62bda5eb5e0e5ef997f95d7fc3b2e\" returns successfully" Feb 13 15:28:54.912533 systemd-networkd[1386]: cali582fb917e16: Gained IPv6LL Feb 13 15:28:54.925476 kernel: bpftool[5374]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:28:55.148965 systemd-networkd[1386]: vxlan.calico: Link UP Feb 13 15:28:55.148974 systemd-networkd[1386]: vxlan.calico: Gained carrier Feb 13 15:28:55.239976 kubelet[2829]: I0213 15:28:55.239857 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-56fvw" podStartSLOduration=26.239818925 podStartE2EDuration="26.239818925s" podCreationTimestamp="2025-02-13 15:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:55.239471224 +0000 UTC m=+40.893094655" watchObservedRunningTime="2025-02-13 15:28:55.239818925 +0000 UTC m=+40.893442356" Feb 13 15:28:55.298046 systemd-networkd[1386]: cali5d1d8cf7a42: Gained IPv6LL Feb 13 15:28:55.488580 systemd-networkd[1386]: califb15f5d76fc: Gained IPv6LL Feb 13 15:28:55.552767 systemd-networkd[1386]: cali68deefb8d87: Gained IPv6LL Feb 13 15:28:55.616705 systemd-networkd[1386]: cali4d0cbaab247: Gained IPv6LL Feb 13 15:28:55.872786 systemd-networkd[1386]: calibef743d1694: Gained IPv6LL Feb 13 15:28:56.256611 kubelet[2829]: I0213 15:28:56.256480 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vfsrv" podStartSLOduration=27.256426771 podStartE2EDuration="27.256426771s" podCreationTimestamp="2025-02-13 15:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:55.267985349 +0000 UTC m=+40.921608780" watchObservedRunningTime="2025-02-13 15:28:56.256426771 +0000 UTC m=+41.910050242" Feb 13 15:28:56.320518 systemd-networkd[1386]: vxlan.calico: Gained IPv6LL Feb 13 15:28:56.682147 containerd[1484]: time="2025-02-13T15:28:56.681007918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:56.682147 containerd[1484]: time="2025-02-13T15:28:56.682093984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Feb 13 15:28:56.682747 containerd[1484]: time="2025-02-13T15:28:56.682715582Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:56.685973 containerd[1484]: time="2025-02-13T15:28:56.685945498Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:56.686670 containerd[1484]: time="2025-02-13T15:28:56.686633500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.75324095s" Feb 13 15:28:56.686670 containerd[1484]: time="2025-02-13T15:28:56.686668462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:56.690212 containerd[1484]: time="2025-02-13T15:28:56.690179156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 15:28:56.690389 containerd[1484]: time="2025-02-13T15:28:56.690364607Z" level=info msg="CreateContainer within sandbox \"b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:56.715149 containerd[1484]: time="2025-02-13T15:28:56.715059789Z" level=info msg="CreateContainer within sandbox \"b632cbc90e50c2394f36df257ffd70490c826011dfdb3b125c873157044a4313\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4fa090b4cab150ae800f0f157f8d79b0069ab89594910014acb0ac5cc68c0371\"" Feb 13 15:28:56.716806 containerd[1484]: time="2025-02-13T15:28:56.716759933Z" level=info msg="StartContainer for \"4fa090b4cab150ae800f0f157f8d79b0069ab89594910014acb0ac5cc68c0371\"" Feb 13 15:28:56.757478 systemd[1]: Started cri-containerd-4fa090b4cab150ae800f0f157f8d79b0069ab89594910014acb0ac5cc68c0371.scope - libcontainer container 4fa090b4cab150ae800f0f157f8d79b0069ab89594910014acb0ac5cc68c0371. 
Feb 13 15:28:56.799233 containerd[1484]: time="2025-02-13T15:28:56.799185707Z" level=info msg="StartContainer for \"4fa090b4cab150ae800f0f157f8d79b0069ab89594910014acb0ac5cc68c0371\" returns successfully" Feb 13 15:28:57.138147 containerd[1484]: time="2025-02-13T15:28:57.137295601Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:57.140185 containerd[1484]: time="2025-02-13T15:28:57.140137015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 15:28:57.142167 containerd[1484]: time="2025-02-13T15:28:57.142139178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 451.92846ms" Feb 13 15:28:57.142312 containerd[1484]: time="2025-02-13T15:28:57.142291467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Feb 13 15:28:57.143020 containerd[1484]: time="2025-02-13T15:28:57.142983709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:28:57.147970 containerd[1484]: time="2025-02-13T15:28:57.147689837Z" level=info msg="CreateContainer within sandbox \"bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 15:28:57.213870 containerd[1484]: time="2025-02-13T15:28:57.213825283Z" level=info msg="CreateContainer within sandbox \"bd8a00fe7ea3dd1bfe437dbe962bd9c09100bff9023d474a688998e87605e23b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0a1f05e01cbdd64339ec4c93ce406c98dd7c2caa6a4af99a09d8d737c40be1b5\"" Feb 13 15:28:57.216329 containerd[1484]: time="2025-02-13T15:28:57.215425981Z" level=info msg="StartContainer for \"0a1f05e01cbdd64339ec4c93ce406c98dd7c2caa6a4af99a09d8d737c40be1b5\"" Feb 13 15:28:57.253177 systemd[1]: Started cri-containerd-0a1f05e01cbdd64339ec4c93ce406c98dd7c2caa6a4af99a09d8d737c40be1b5.scope - libcontainer container 0a1f05e01cbdd64339ec4c93ce406c98dd7c2caa6a4af99a09d8d737c40be1b5. 
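Both calico-apiserver pods pull ghcr.io/flatcar/calico/apiserver:v3.29.1 and resolve it to the same digest, but the second PullImage completes in roughly 452 ms versus 2.75 s for the first, which is the expected pattern when the layers are already in containerd's content store and only the manifest needs re-resolving. A minimal sketch for checking whether a reference is already known locally (same socket and the CRI "k8s.io" namespace assumed; illustrative only, not taken from this log):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores its images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.29.1"
	img, err := client.GetImage(ctx, ref)
	if errdefs.IsNotFound(err) {
		fmt.Println(ref, "not present; a pull would fetch all layers")
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ref, "already present as", img.Target().Digest)
}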
Feb 13 15:28:57.316620 containerd[1484]: time="2025-02-13T15:28:57.316577130Z" level=info msg="StartContainer for \"0a1f05e01cbdd64339ec4c93ce406c98dd7c2caa6a4af99a09d8d737c40be1b5\" returns successfully" Feb 13 15:28:58.286497 kubelet[2829]: I0213 15:28:58.286362 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96f496cb-flt7v" podStartSLOduration=18.530193231 podStartE2EDuration="21.286315554s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:53.93105377 +0000 UTC m=+39.584677161" lastFinishedPulling="2025-02-13 15:28:56.687176053 +0000 UTC m=+42.340799484" observedRunningTime="2025-02-13 15:28:57.270947978 +0000 UTC m=+42.924571409" watchObservedRunningTime="2025-02-13 15:28:58.286315554 +0000 UTC m=+43.939938945" Feb 13 15:28:58.505453 kubelet[2829]: I0213 15:28:58.504721 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96f496cb-jtsr2" podStartSLOduration=18.614145896 podStartE2EDuration="21.504633944s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:54.252363653 +0000 UTC m=+39.905987084" lastFinishedPulling="2025-02-13 15:28:57.142851621 +0000 UTC m=+42.796475132" observedRunningTime="2025-02-13 15:28:58.285998454 +0000 UTC m=+43.939621885" watchObservedRunningTime="2025-02-13 15:28:58.504633944 +0000 UTC m=+44.158257455" Feb 13 15:28:58.723910 containerd[1484]: time="2025-02-13T15:28:58.723837989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:58.725160 containerd[1484]: time="2025-02-13T15:28:58.725107027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 15:28:58.727105 containerd[1484]: time="2025-02-13T15:28:58.725968520Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:58.728581 containerd[1484]: time="2025-02-13T15:28:58.728545719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:58.729294 containerd[1484]: time="2025-02-13T15:28:58.729245842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.58622073s" Feb 13 15:28:58.729398 containerd[1484]: time="2025-02-13T15:28:58.729294245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 15:28:58.730021 containerd[1484]: time="2025-02-13T15:28:58.729984567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 15:28:58.734456 containerd[1484]: time="2025-02-13T15:28:58.734415360Z" level=info msg="CreateContainer within sandbox \"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:28:58.757504 containerd[1484]: 
time="2025-02-13T15:28:58.757455537Z" level=info msg="CreateContainer within sandbox \"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"de163ec7202aafda5a5e4120be6843f2dc1d84c9ec58e6282e551b0cf4124dda\"" Feb 13 15:28:58.758300 containerd[1484]: time="2025-02-13T15:28:58.758253666Z" level=info msg="StartContainer for \"de163ec7202aafda5a5e4120be6843f2dc1d84c9ec58e6282e551b0cf4124dda\"" Feb 13 15:28:58.796585 systemd[1]: Started cri-containerd-de163ec7202aafda5a5e4120be6843f2dc1d84c9ec58e6282e551b0cf4124dda.scope - libcontainer container de163ec7202aafda5a5e4120be6843f2dc1d84c9ec58e6282e551b0cf4124dda. Feb 13 15:28:58.831367 containerd[1484]: time="2025-02-13T15:28:58.831239076Z" level=info msg="StartContainer for \"de163ec7202aafda5a5e4120be6843f2dc1d84c9ec58e6282e551b0cf4124dda\" returns successfully" Feb 13 15:28:59.274337 kubelet[2829]: I0213 15:28:59.274250 2829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:29:00.645383 containerd[1484]: time="2025-02-13T15:29:00.644348036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:00.645383 containerd[1484]: time="2025-02-13T15:29:00.645299896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Feb 13 15:29:00.646227 containerd[1484]: time="2025-02-13T15:29:00.646159869Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:00.648802 containerd[1484]: time="2025-02-13T15:29:00.648741830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:00.649854 containerd[1484]: time="2025-02-13T15:29:00.649420512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.919403622s" Feb 13 15:29:00.649854 containerd[1484]: time="2025-02-13T15:29:00.649471035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Feb 13 15:29:00.651311 containerd[1484]: time="2025-02-13T15:29:00.651218064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:29:00.689469 containerd[1484]: time="2025-02-13T15:29:00.689288510Z" level=info msg="CreateContainer within sandbox \"4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 15:29:00.734289 containerd[1484]: time="2025-02-13T15:29:00.734233464Z" level=info msg="CreateContainer within sandbox \"4ce6c8c60f0ffbc4ef29af7dc03c8b05365d7be9cfb3f118177c60703faa4ee3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c9531a8c3c4344611b70d764acae2ed44e7845614e1c040176e2f9f9b6776a93\"" Feb 13 
15:29:00.737390 containerd[1484]: time="2025-02-13T15:29:00.735123999Z" level=info msg="StartContainer for \"c9531a8c3c4344611b70d764acae2ed44e7845614e1c040176e2f9f9b6776a93\"" Feb 13 15:29:00.795556 systemd[1]: Started cri-containerd-c9531a8c3c4344611b70d764acae2ed44e7845614e1c040176e2f9f9b6776a93.scope - libcontainer container c9531a8c3c4344611b70d764acae2ed44e7845614e1c040176e2f9f9b6776a93. Feb 13 15:29:00.836478 containerd[1484]: time="2025-02-13T15:29:00.836397415Z" level=info msg="StartContainer for \"c9531a8c3c4344611b70d764acae2ed44e7845614e1c040176e2f9f9b6776a93\" returns successfully" Feb 13 15:29:01.304447 kubelet[2829]: I0213 15:29:01.304400 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dfcc87966-z4lb7" podStartSLOduration=18.217208162 podStartE2EDuration="24.304356038s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:54.563204374 +0000 UTC m=+40.216827805" lastFinishedPulling="2025-02-13 15:29:00.65035225 +0000 UTC m=+46.303975681" observedRunningTime="2025-02-13 15:29:01.304327596 +0000 UTC m=+46.957951027" watchObservedRunningTime="2025-02-13 15:29:01.304356038 +0000 UTC m=+46.957979429" Feb 13 15:29:02.365366 containerd[1484]: time="2025-02-13T15:29:02.365223659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:02.368328 containerd[1484]: time="2025-02-13T15:29:02.367556445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 15:29:02.368328 containerd[1484]: time="2025-02-13T15:29:02.367621169Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:02.372151 containerd[1484]: time="2025-02-13T15:29:02.372069088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:29:02.373125 containerd[1484]: time="2025-02-13T15:29:02.372984386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.72172808s" Feb 13 15:29:02.373125 containerd[1484]: time="2025-02-13T15:29:02.373017348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 15:29:02.377753 containerd[1484]: time="2025-02-13T15:29:02.377618637Z" level=info msg="CreateContainer within sandbox \"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:29:02.405239 containerd[1484]: time="2025-02-13T15:29:02.405134684Z" level=info msg="CreateContainer within sandbox \"00c57b481b6b486b874763d2821d41bd6ef07d4b67bb901a4baaf1824f802bc9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"161340240a79bdf05d887e995ce2b6c266bf68ec725680f7cfb0ba2ef611d571\"" Feb 13 15:29:02.407418 containerd[1484]: time="2025-02-13T15:29:02.406152828Z" level=info msg="StartContainer for \"161340240a79bdf05d887e995ce2b6c266bf68ec725680f7cfb0ba2ef611d571\"" Feb 13 15:29:02.457025 systemd[1]: Started cri-containerd-161340240a79bdf05d887e995ce2b6c266bf68ec725680f7cfb0ba2ef611d571.scope - libcontainer container 161340240a79bdf05d887e995ce2b6c266bf68ec725680f7cfb0ba2ef611d571. Feb 13 15:29:02.502491 containerd[1484]: time="2025-02-13T15:29:02.502411150Z" level=info msg="StartContainer for \"161340240a79bdf05d887e995ce2b6c266bf68ec725680f7cfb0ba2ef611d571\" returns successfully" Feb 13 15:29:02.615460 kubelet[2829]: I0213 15:29:02.615216 2829 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:29:02.615460 kubelet[2829]: I0213 15:29:02.615257 2829 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:29:03.315445 kubelet[2829]: I0213 15:29:03.315194 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ngp8v" podStartSLOduration=18.394911401 podStartE2EDuration="26.315145854s" podCreationTimestamp="2025-02-13 15:28:37 +0000 UTC" firstStartedPulling="2025-02-13 15:28:54.453671191 +0000 UTC m=+40.107294622" lastFinishedPulling="2025-02-13 15:29:02.373905644 +0000 UTC m=+48.027529075" observedRunningTime="2025-02-13 15:29:03.314318722 +0000 UTC m=+48.967942313" watchObservedRunningTime="2025-02-13 15:29:03.315145854 +0000 UTC m=+48.968769325" Feb 13 15:29:14.468992 containerd[1484]: time="2025-02-13T15:29:14.468938917Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:29:14.469573 containerd[1484]: time="2025-02-13T15:29:14.469085326Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:29:14.469573 containerd[1484]: time="2025-02-13T15:29:14.469100287Z" level=info msg="StopPodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:29:14.469877 containerd[1484]: time="2025-02-13T15:29:14.469853537Z" level=info msg="RemovePodSandbox for \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:29:14.469926 containerd[1484]: time="2025-02-13T15:29:14.469888059Z" level=info msg="Forcibly stopping sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\"" Feb 13 15:29:14.470003 containerd[1484]: time="2025-02-13T15:29:14.469986386Z" level=info msg="TearDown network for sandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" successfully" Feb 13 15:29:14.474445 containerd[1484]: time="2025-02-13T15:29:14.474357033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.474445 containerd[1484]: time="2025-02-13T15:29:14.474443118Z" level=info msg="RemovePodSandbox \"bdd0c188cd606d4e907186dacdaf294c9bde54fa8e674617e265f0cae5eb39ec\" returns successfully" Feb 13 15:29:14.475170 containerd[1484]: time="2025-02-13T15:29:14.474948152Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:29:14.475170 containerd[1484]: time="2025-02-13T15:29:14.475041998Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:29:14.475170 containerd[1484]: time="2025-02-13T15:29:14.475051198Z" level=info msg="StopPodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:29:14.475871 containerd[1484]: time="2025-02-13T15:29:14.475790767Z" level=info msg="RemovePodSandbox for \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:29:14.475871 containerd[1484]: time="2025-02-13T15:29:14.475867372Z" level=info msg="Forcibly stopping sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\"" Feb 13 15:29:14.476026 containerd[1484]: time="2025-02-13T15:29:14.475928256Z" level=info msg="TearDown network for sandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" successfully" Feb 13 15:29:14.479190 containerd[1484]: time="2025-02-13T15:29:14.479144707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.479311 containerd[1484]: time="2025-02-13T15:29:14.479206191Z" level=info msg="RemovePodSandbox \"a780f3f7d79f7d8bf102722897df58c74b5108e508712b8174a6abead4b9f871\" returns successfully" Feb 13 15:29:14.479667 containerd[1484]: time="2025-02-13T15:29:14.479641980Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:29:14.480164 containerd[1484]: time="2025-02-13T15:29:14.479924759Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:29:14.480164 containerd[1484]: time="2025-02-13T15:29:14.479943400Z" level=info msg="StopPodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:29:14.480470 containerd[1484]: time="2025-02-13T15:29:14.480329545Z" level=info msg="RemovePodSandbox for \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:29:14.480470 containerd[1484]: time="2025-02-13T15:29:14.480413311Z" level=info msg="Forcibly stopping sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\"" Feb 13 15:29:14.480608 containerd[1484]: time="2025-02-13T15:29:14.480548240Z" level=info msg="TearDown network for sandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" successfully" Feb 13 15:29:14.485421 containerd[1484]: time="2025-02-13T15:29:14.485258749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.485421 containerd[1484]: time="2025-02-13T15:29:14.485341194Z" level=info msg="RemovePodSandbox \"f421483589d4552e63d851e8c8f60577139114ec21d1e4460cae547cbcc38e69\" returns successfully" Feb 13 15:29:14.486298 containerd[1484]: time="2025-02-13T15:29:14.486059482Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:29:14.486298 containerd[1484]: time="2025-02-13T15:29:14.486195971Z" level=info msg="TearDown network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" successfully" Feb 13 15:29:14.486298 containerd[1484]: time="2025-02-13T15:29:14.486209291Z" level=info msg="StopPodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" returns successfully" Feb 13 15:29:14.486807 containerd[1484]: time="2025-02-13T15:29:14.486780569Z" level=info msg="RemovePodSandbox for \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:29:14.486943 containerd[1484]: time="2025-02-13T15:29:14.486924658Z" level=info msg="Forcibly stopping sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\"" Feb 13 15:29:14.487388 containerd[1484]: time="2025-02-13T15:29:14.487229318Z" level=info msg="TearDown network for sandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" successfully" Feb 13 15:29:14.492669 containerd[1484]: time="2025-02-13T15:29:14.492533107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.492852 containerd[1484]: time="2025-02-13T15:29:14.492804805Z" level=info msg="RemovePodSandbox \"3558edfeb18fc45899f4532b06af94c8ccf095c7119a4d1a29a1413ee53efaff\" returns successfully" Feb 13 15:29:14.494320 containerd[1484]: time="2025-02-13T15:29:14.494261420Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" Feb 13 15:29:14.494440 containerd[1484]: time="2025-02-13T15:29:14.494420911Z" level=info msg="TearDown network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" successfully" Feb 13 15:29:14.494493 containerd[1484]: time="2025-02-13T15:29:14.494437792Z" level=info msg="StopPodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" returns successfully" Feb 13 15:29:14.494860 containerd[1484]: time="2025-02-13T15:29:14.494836378Z" level=info msg="RemovePodSandbox for \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" Feb 13 15:29:14.495106 containerd[1484]: time="2025-02-13T15:29:14.494951346Z" level=info msg="Forcibly stopping sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\"" Feb 13 15:29:14.495106 containerd[1484]: time="2025-02-13T15:29:14.495042432Z" level=info msg="TearDown network for sandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" successfully" Feb 13 15:29:14.500196 containerd[1484]: time="2025-02-13T15:29:14.500101884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.500196 containerd[1484]: time="2025-02-13T15:29:14.500192570Z" level=info msg="RemovePodSandbox \"0d9df269bbabfadd020ac707ac09bf846c3d9b4c9cf233f34e317dbf3106d498\" returns successfully" Feb 13 15:29:14.500932 containerd[1484]: time="2025-02-13T15:29:14.500867574Z" level=info msg="StopPodSandbox for \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\"" Feb 13 15:29:14.501187 containerd[1484]: time="2025-02-13T15:29:14.500989222Z" level=info msg="TearDown network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" successfully" Feb 13 15:29:14.501187 containerd[1484]: time="2025-02-13T15:29:14.501002263Z" level=info msg="StopPodSandbox for \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" returns successfully" Feb 13 15:29:14.502751 containerd[1484]: time="2025-02-13T15:29:14.501457013Z" level=info msg="RemovePodSandbox for \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\"" Feb 13 15:29:14.502751 containerd[1484]: time="2025-02-13T15:29:14.501499856Z" level=info msg="Forcibly stopping sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\"" Feb 13 15:29:14.502751 containerd[1484]: time="2025-02-13T15:29:14.501618464Z" level=info msg="TearDown network for sandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" successfully" Feb 13 15:29:14.506841 containerd[1484]: time="2025-02-13T15:29:14.506795724Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.507073 containerd[1484]: time="2025-02-13T15:29:14.507049420Z" level=info msg="RemovePodSandbox \"d04d72133f5f8bf9d2c7531d68119a63e6115757bceb05b21bbdd4f9aaf105d6\" returns successfully" Feb 13 15:29:14.507996 containerd[1484]: time="2025-02-13T15:29:14.507962480Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:29:14.508127 containerd[1484]: time="2025-02-13T15:29:14.508079888Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:29:14.508127 containerd[1484]: time="2025-02-13T15:29:14.508092129Z" level=info msg="StopPodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:29:14.508685 containerd[1484]: time="2025-02-13T15:29:14.508481275Z" level=info msg="RemovePodSandbox for \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:29:14.508685 containerd[1484]: time="2025-02-13T15:29:14.508511357Z" level=info msg="Forcibly stopping sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\"" Feb 13 15:29:14.508685 containerd[1484]: time="2025-02-13T15:29:14.508647685Z" level=info msg="TearDown network for sandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" successfully" Feb 13 15:29:14.512585 containerd[1484]: time="2025-02-13T15:29:14.512515140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.512703 containerd[1484]: time="2025-02-13T15:29:14.512628547Z" level=info msg="RemovePodSandbox \"2441a2e991e8403a607a55a235397ec3ab9cffaa7f03d3602d528cb47489babe\" returns successfully" Feb 13 15:29:14.513387 containerd[1484]: time="2025-02-13T15:29:14.513076856Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:29:14.513387 containerd[1484]: time="2025-02-13T15:29:14.513181263Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:29:14.513387 containerd[1484]: time="2025-02-13T15:29:14.513191824Z" level=info msg="StopPodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:29:14.513798 containerd[1484]: time="2025-02-13T15:29:14.513749061Z" level=info msg="RemovePodSandbox for \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:29:14.513798 containerd[1484]: time="2025-02-13T15:29:14.513783943Z" level=info msg="Forcibly stopping sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\"" Feb 13 15:29:14.513888 containerd[1484]: time="2025-02-13T15:29:14.513855628Z" level=info msg="TearDown network for sandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" successfully" Feb 13 15:29:14.517003 containerd[1484]: time="2025-02-13T15:29:14.516966712Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.517101 containerd[1484]: time="2025-02-13T15:29:14.517030796Z" level=info msg="RemovePodSandbox \"2352bc2924486d89ebc1d148f57a20e8e29c1e6a5cb5437c9ea541b6831affdb\" returns successfully" Feb 13 15:29:14.517621 containerd[1484]: time="2025-02-13T15:29:14.517454384Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:29:14.517621 containerd[1484]: time="2025-02-13T15:29:14.517551910Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:29:14.517621 containerd[1484]: time="2025-02-13T15:29:14.517562831Z" level=info msg="StopPodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:29:14.518402 containerd[1484]: time="2025-02-13T15:29:14.518167231Z" level=info msg="RemovePodSandbox for \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:29:14.518402 containerd[1484]: time="2025-02-13T15:29:14.518196513Z" level=info msg="Forcibly stopping sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\"" Feb 13 15:29:14.518402 containerd[1484]: time="2025-02-13T15:29:14.518297879Z" level=info msg="TearDown network for sandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" successfully" Feb 13 15:29:14.522042 containerd[1484]: time="2025-02-13T15:29:14.521775468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.522042 containerd[1484]: time="2025-02-13T15:29:14.521846313Z" level=info msg="RemovePodSandbox \"f4bf60130edfa2031c4baab719917c17b7b6cd9f9c41ea5884aa4e225fe32690\" returns successfully" Feb 13 15:29:14.522799 containerd[1484]: time="2025-02-13T15:29:14.522386668Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:29:14.522799 containerd[1484]: time="2025-02-13T15:29:14.522479834Z" level=info msg="TearDown network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" successfully" Feb 13 15:29:14.522799 containerd[1484]: time="2025-02-13T15:29:14.522489115Z" level=info msg="StopPodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" returns successfully" Feb 13 15:29:14.522925 containerd[1484]: time="2025-02-13T15:29:14.522878380Z" level=info msg="RemovePodSandbox for \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:29:14.522925 containerd[1484]: time="2025-02-13T15:29:14.522907422Z" level=info msg="Forcibly stopping sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\"" Feb 13 15:29:14.523090 containerd[1484]: time="2025-02-13T15:29:14.522983547Z" level=info msg="TearDown network for sandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" successfully" Feb 13 15:29:14.526955 containerd[1484]: time="2025-02-13T15:29:14.526877883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.526955 containerd[1484]: time="2025-02-13T15:29:14.526953208Z" level=info msg="RemovePodSandbox \"4243586e170af5a71ac0376998d6789614b05f3642ab7d62450ae5cd719e2071\" returns successfully" Feb 13 15:29:14.527596 containerd[1484]: time="2025-02-13T15:29:14.527537126Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" Feb 13 15:29:14.527666 containerd[1484]: time="2025-02-13T15:29:14.527626852Z" level=info msg="TearDown network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" successfully" Feb 13 15:29:14.527666 containerd[1484]: time="2025-02-13T15:29:14.527638013Z" level=info msg="StopPodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" returns successfully" Feb 13 15:29:14.528149 containerd[1484]: time="2025-02-13T15:29:14.527905871Z" level=info msg="RemovePodSandbox for \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" Feb 13 15:29:14.528149 containerd[1484]: time="2025-02-13T15:29:14.527927432Z" level=info msg="Forcibly stopping sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\"" Feb 13 15:29:14.528149 containerd[1484]: time="2025-02-13T15:29:14.527985716Z" level=info msg="TearDown network for sandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" successfully" Feb 13 15:29:14.531392 containerd[1484]: time="2025-02-13T15:29:14.531257651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.531694 containerd[1484]: time="2025-02-13T15:29:14.531401500Z" level=info msg="RemovePodSandbox \"1dd0a27fa2980fb32fa0cad446c8fd372289845116e33029ab15c5c16793b94d\" returns successfully" Feb 13 15:29:14.532973 containerd[1484]: time="2025-02-13T15:29:14.532294559Z" level=info msg="StopPodSandbox for \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\"" Feb 13 15:29:14.532973 containerd[1484]: time="2025-02-13T15:29:14.532551896Z" level=info msg="TearDown network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" successfully" Feb 13 15:29:14.532973 containerd[1484]: time="2025-02-13T15:29:14.532580098Z" level=info msg="StopPodSandbox for \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" returns successfully" Feb 13 15:29:14.533682 containerd[1484]: time="2025-02-13T15:29:14.533643728Z" level=info msg="RemovePodSandbox for \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\"" Feb 13 15:29:14.533787 containerd[1484]: time="2025-02-13T15:29:14.533747854Z" level=info msg="Forcibly stopping sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\"" Feb 13 15:29:14.533842 containerd[1484]: time="2025-02-13T15:29:14.533820019Z" level=info msg="TearDown network for sandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" successfully" Feb 13 15:29:14.537218 containerd[1484]: time="2025-02-13T15:29:14.537103955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.537218 containerd[1484]: time="2025-02-13T15:29:14.537168559Z" level=info msg="RemovePodSandbox \"bcc3b953582117dcc64191f84d98e96c5e9770dd8160a214ab818a0c1d45838b\" returns successfully" Feb 13 15:29:14.537680 containerd[1484]: time="2025-02-13T15:29:14.537642470Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:29:14.537740 containerd[1484]: time="2025-02-13T15:29:14.537729756Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:29:14.537784 containerd[1484]: time="2025-02-13T15:29:14.537740437Z" level=info msg="StopPodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:29:14.538382 containerd[1484]: time="2025-02-13T15:29:14.538228389Z" level=info msg="RemovePodSandbox for \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:29:14.538382 containerd[1484]: time="2025-02-13T15:29:14.538254430Z" level=info msg="Forcibly stopping sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\"" Feb 13 15:29:14.538382 containerd[1484]: time="2025-02-13T15:29:14.538338596Z" level=info msg="TearDown network for sandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" successfully" Feb 13 15:29:14.541421 containerd[1484]: time="2025-02-13T15:29:14.541359995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.541659 containerd[1484]: time="2025-02-13T15:29:14.541434519Z" level=info msg="RemovePodSandbox \"df60ac7f9c4f4ce3ec5d4fd591fd27d7c46b446b232cbd8fe77ca3bcb4ca8451\" returns successfully" Feb 13 15:29:14.542650 containerd[1484]: time="2025-02-13T15:29:14.542099323Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:29:14.542650 containerd[1484]: time="2025-02-13T15:29:14.542238452Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:29:14.542650 containerd[1484]: time="2025-02-13T15:29:14.542255933Z" level=info msg="StopPodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:29:14.543196 containerd[1484]: time="2025-02-13T15:29:14.543055146Z" level=info msg="RemovePodSandbox for \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:29:14.543196 containerd[1484]: time="2025-02-13T15:29:14.543087068Z" level=info msg="Forcibly stopping sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\"" Feb 13 15:29:14.543196 containerd[1484]: time="2025-02-13T15:29:14.543151712Z" level=info msg="TearDown network for sandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" successfully" Feb 13 15:29:14.546432 containerd[1484]: time="2025-02-13T15:29:14.546305439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.546432 containerd[1484]: time="2025-02-13T15:29:14.546391805Z" level=info msg="RemovePodSandbox \"1f1a1d3defbab3690773a521ea39442d1f9ab48a5e4d73f041c09eec5ba604f8\" returns successfully" Feb 13 15:29:14.546899 containerd[1484]: time="2025-02-13T15:29:14.546856156Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:29:14.547045 containerd[1484]: time="2025-02-13T15:29:14.547022206Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:29:14.547045 containerd[1484]: time="2025-02-13T15:29:14.547037928Z" level=info msg="StopPodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:29:14.547383 containerd[1484]: time="2025-02-13T15:29:14.547331227Z" level=info msg="RemovePodSandbox for \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:29:14.547473 containerd[1484]: time="2025-02-13T15:29:14.547390591Z" level=info msg="Forcibly stopping sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\"" Feb 13 15:29:14.547473 containerd[1484]: time="2025-02-13T15:29:14.547451435Z" level=info msg="TearDown network for sandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" successfully" Feb 13 15:29:14.550446 containerd[1484]: time="2025-02-13T15:29:14.550399788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.550538 containerd[1484]: time="2025-02-13T15:29:14.550457192Z" level=info msg="RemovePodSandbox \"48e07194dbb0da347721d98a38dc7413906db0b34ec2565d79e5d7f46512884e\" returns successfully" Feb 13 15:29:14.550845 containerd[1484]: time="2025-02-13T15:29:14.550801055Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:29:14.550903 containerd[1484]: time="2025-02-13T15:29:14.550889821Z" level=info msg="TearDown network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" successfully" Feb 13 15:29:14.550903 containerd[1484]: time="2025-02-13T15:29:14.550900981Z" level=info msg="StopPodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" returns successfully" Feb 13 15:29:14.551236 containerd[1484]: time="2025-02-13T15:29:14.551195761Z" level=info msg="RemovePodSandbox for \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:29:14.551236 containerd[1484]: time="2025-02-13T15:29:14.551224363Z" level=info msg="Forcibly stopping sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\"" Feb 13 15:29:14.551329 containerd[1484]: time="2025-02-13T15:29:14.551294527Z" level=info msg="TearDown network for sandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" successfully" Feb 13 15:29:14.553954 containerd[1484]: time="2025-02-13T15:29:14.553911619Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.554061 containerd[1484]: time="2025-02-13T15:29:14.553969703Z" level=info msg="RemovePodSandbox \"1daf8c6993fa45e4d2a255c3c67c8230bc5966f3ac93bb3339346ad6e9619554\" returns successfully" Feb 13 15:29:14.554690 containerd[1484]: time="2025-02-13T15:29:14.554497538Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" Feb 13 15:29:14.554690 containerd[1484]: time="2025-02-13T15:29:14.554603705Z" level=info msg="TearDown network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" successfully" Feb 13 15:29:14.554690 containerd[1484]: time="2025-02-13T15:29:14.554616425Z" level=info msg="StopPodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" returns successfully" Feb 13 15:29:14.555035 containerd[1484]: time="2025-02-13T15:29:14.554985330Z" level=info msg="RemovePodSandbox for \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" Feb 13 15:29:14.555035 containerd[1484]: time="2025-02-13T15:29:14.555013291Z" level=info msg="Forcibly stopping sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\"" Feb 13 15:29:14.555115 containerd[1484]: time="2025-02-13T15:29:14.555076296Z" level=info msg="TearDown network for sandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" successfully" Feb 13 15:29:14.557862 containerd[1484]: time="2025-02-13T15:29:14.557824636Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.557932 containerd[1484]: time="2025-02-13T15:29:14.557882520Z" level=info msg="RemovePodSandbox \"94d3ecfe64812dd33dda54174f0bc89a183c5a8ecc6867ce86697f10a1d261e0\" returns successfully" Feb 13 15:29:14.558300 containerd[1484]: time="2025-02-13T15:29:14.558261865Z" level=info msg="StopPodSandbox for \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\"" Feb 13 15:29:14.558726 containerd[1484]: time="2025-02-13T15:29:14.558634729Z" level=info msg="TearDown network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" successfully" Feb 13 15:29:14.558726 containerd[1484]: time="2025-02-13T15:29:14.558657371Z" level=info msg="StopPodSandbox for \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" returns successfully" Feb 13 15:29:14.559014 containerd[1484]: time="2025-02-13T15:29:14.558969871Z" level=info msg="RemovePodSandbox for \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\"" Feb 13 15:29:14.559014 containerd[1484]: time="2025-02-13T15:29:14.559004794Z" level=info msg="Forcibly stopping sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\"" Feb 13 15:29:14.559094 containerd[1484]: time="2025-02-13T15:29:14.559074358Z" level=info msg="TearDown network for sandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" successfully" Feb 13 15:29:14.561752 containerd[1484]: time="2025-02-13T15:29:14.561721012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.561822 containerd[1484]: time="2025-02-13T15:29:14.561776856Z" level=info msg="RemovePodSandbox \"43df26cf11bd8ab3576ecee9aa5e3bcc1ea91689abc68d4948538580af1e3801\" returns successfully" Feb 13 15:29:14.562230 containerd[1484]: time="2025-02-13T15:29:14.562133959Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:29:14.562366 containerd[1484]: time="2025-02-13T15:29:14.562255967Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:29:14.562366 containerd[1484]: time="2025-02-13T15:29:14.562295290Z" level=info msg="StopPodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:29:14.562589 containerd[1484]: time="2025-02-13T15:29:14.562568148Z" level=info msg="RemovePodSandbox for \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:29:14.562703 containerd[1484]: time="2025-02-13T15:29:14.562659914Z" level=info msg="Forcibly stopping sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\"" Feb 13 15:29:14.562805 containerd[1484]: time="2025-02-13T15:29:14.562777362Z" level=info msg="TearDown network for sandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" successfully" Feb 13 15:29:14.565440 containerd[1484]: time="2025-02-13T15:29:14.565323449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.565440 containerd[1484]: time="2025-02-13T15:29:14.565389933Z" level=info msg="RemovePodSandbox \"0311efb633ecd8aeeaefc4e997dd5e2cbd5d1c2c2373b16486777e333757b5d3\" returns successfully" Feb 13 15:29:14.566026 containerd[1484]: time="2025-02-13T15:29:14.565830962Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:29:14.566026 containerd[1484]: time="2025-02-13T15:29:14.565922568Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:29:14.566026 containerd[1484]: time="2025-02-13T15:29:14.565932849Z" level=info msg="StopPodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:29:14.566796 containerd[1484]: time="2025-02-13T15:29:14.566158024Z" level=info msg="RemovePodSandbox for \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:29:14.566796 containerd[1484]: time="2025-02-13T15:29:14.566185225Z" level=info msg="Forcibly stopping sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\"" Feb 13 15:29:14.566796 containerd[1484]: time="2025-02-13T15:29:14.566447003Z" level=info msg="TearDown network for sandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" successfully" Feb 13 15:29:14.570978 containerd[1484]: time="2025-02-13T15:29:14.570935017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.571071 containerd[1484]: time="2025-02-13T15:29:14.570996301Z" level=info msg="RemovePodSandbox \"33ebf64e28d29d0fe6b5d40bac6142688327a2a2bbc9deae9d4cfab32d8c2633\" returns successfully" Feb 13 15:29:14.575625 containerd[1484]: time="2025-02-13T15:29:14.575586923Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:29:14.575746 containerd[1484]: time="2025-02-13T15:29:14.575700130Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:29:14.575746 containerd[1484]: time="2025-02-13T15:29:14.575710491Z" level=info msg="StopPodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:29:14.577301 containerd[1484]: time="2025-02-13T15:29:14.576175482Z" level=info msg="RemovePodSandbox for \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:29:14.577301 containerd[1484]: time="2025-02-13T15:29:14.576201963Z" level=info msg="Forcibly stopping sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\"" Feb 13 15:29:14.577301 containerd[1484]: time="2025-02-13T15:29:14.576296090Z" level=info msg="TearDown network for sandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" successfully" Feb 13 15:29:14.581878 containerd[1484]: time="2025-02-13T15:29:14.581840734Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.582040 containerd[1484]: time="2025-02-13T15:29:14.582023386Z" level=info msg="RemovePodSandbox \"3f541ea97385f59f3a1e790072d683d885254bac9a8001a48c8a661ed90e70ee\" returns successfully" Feb 13 15:29:14.582609 containerd[1484]: time="2025-02-13T15:29:14.582585863Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:29:14.582699 containerd[1484]: time="2025-02-13T15:29:14.582682989Z" level=info msg="TearDown network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" successfully" Feb 13 15:29:14.582732 containerd[1484]: time="2025-02-13T15:29:14.582697030Z" level=info msg="StopPodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" returns successfully" Feb 13 15:29:14.583154 containerd[1484]: time="2025-02-13T15:29:14.583101137Z" level=info msg="RemovePodSandbox for \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:29:14.583154 containerd[1484]: time="2025-02-13T15:29:14.583144900Z" level=info msg="Forcibly stopping sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\"" Feb 13 15:29:14.583280 containerd[1484]: time="2025-02-13T15:29:14.583246466Z" level=info msg="TearDown network for sandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" successfully" Feb 13 15:29:14.588897 containerd[1484]: time="2025-02-13T15:29:14.588815272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.588897 containerd[1484]: time="2025-02-13T15:29:14.588893837Z" level=info msg="RemovePodSandbox \"04fda1060187fb3ab5ea07eb8553fcc4416e2e2ffd23647667d834203d5de579\" returns successfully" Feb 13 15:29:14.589787 containerd[1484]: time="2025-02-13T15:29:14.589597243Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" Feb 13 15:29:14.589787 containerd[1484]: time="2025-02-13T15:29:14.589773695Z" level=info msg="TearDown network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" successfully" Feb 13 15:29:14.589787 containerd[1484]: time="2025-02-13T15:29:14.589790616Z" level=info msg="StopPodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" returns successfully" Feb 13 15:29:14.590601 containerd[1484]: time="2025-02-13T15:29:14.590461060Z" level=info msg="RemovePodSandbox for \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" Feb 13 15:29:14.590601 containerd[1484]: time="2025-02-13T15:29:14.590520264Z" level=info msg="Forcibly stopping sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\"" Feb 13 15:29:14.595305 containerd[1484]: time="2025-02-13T15:29:14.594678857Z" level=info msg="TearDown network for sandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" successfully" Feb 13 15:29:14.599042 containerd[1484]: time="2025-02-13T15:29:14.598981020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.599158 containerd[1484]: time="2025-02-13T15:29:14.599059505Z" level=info msg="RemovePodSandbox \"7e3d926031fbd47d63e1011a598117fc509cac834dda10df3845be8979f6f492\" returns successfully" Feb 13 15:29:14.599710 containerd[1484]: time="2025-02-13T15:29:14.599686026Z" level=info msg="StopPodSandbox for \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\"" Feb 13 15:29:14.600175 containerd[1484]: time="2025-02-13T15:29:14.599978125Z" level=info msg="TearDown network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" successfully" Feb 13 15:29:14.600175 containerd[1484]: time="2025-02-13T15:29:14.599995087Z" level=info msg="StopPodSandbox for \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" returns successfully" Feb 13 15:29:14.600332 containerd[1484]: time="2025-02-13T15:29:14.600256624Z" level=info msg="RemovePodSandbox for \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\"" Feb 13 15:29:14.600332 containerd[1484]: time="2025-02-13T15:29:14.600295906Z" level=info msg="Forcibly stopping sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\"" Feb 13 15:29:14.600425 containerd[1484]: time="2025-02-13T15:29:14.600380912Z" level=info msg="TearDown network for sandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" successfully" Feb 13 15:29:14.602891 containerd[1484]: time="2025-02-13T15:29:14.602856395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.602965 containerd[1484]: time="2025-02-13T15:29:14.602917999Z" level=info msg="RemovePodSandbox \"2df71ba4151bbbf440c2dbe721c22705432f671693a9ad2ab0efc45c3ffc3bda\" returns successfully" Feb 13 15:29:14.603845 containerd[1484]: time="2025-02-13T15:29:14.603453314Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:29:14.603845 containerd[1484]: time="2025-02-13T15:29:14.603612044Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:29:14.603845 containerd[1484]: time="2025-02-13T15:29:14.603626365Z" level=info msg="StopPodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:29:14.604283 containerd[1484]: time="2025-02-13T15:29:14.604255206Z" level=info msg="RemovePodSandbox for \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:29:14.604526 containerd[1484]: time="2025-02-13T15:29:14.604369174Z" level=info msg="Forcibly stopping sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\"" Feb 13 15:29:14.604526 containerd[1484]: time="2025-02-13T15:29:14.604475781Z" level=info msg="TearDown network for sandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" successfully" Feb 13 15:29:14.608213 containerd[1484]: time="2025-02-13T15:29:14.608172544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.608715 containerd[1484]: time="2025-02-13T15:29:14.608411719Z" level=info msg="RemovePodSandbox \"9384a745d80d105f06b68b0648cefdabe6bd415ecbb67189e93aa6274f43edaa\" returns successfully" Feb 13 15:29:14.609150 containerd[1484]: time="2025-02-13T15:29:14.609116126Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:29:14.609447 containerd[1484]: time="2025-02-13T15:29:14.609381263Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:29:14.609497 containerd[1484]: time="2025-02-13T15:29:14.609450188Z" level=info msg="StopPodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:29:14.610038 containerd[1484]: time="2025-02-13T15:29:14.609972542Z" level=info msg="RemovePodSandbox for \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:29:14.610038 containerd[1484]: time="2025-02-13T15:29:14.610027666Z" level=info msg="Forcibly stopping sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\"" Feb 13 15:29:14.610534 containerd[1484]: time="2025-02-13T15:29:14.610492576Z" level=info msg="TearDown network for sandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" successfully" Feb 13 15:29:14.613696 containerd[1484]: time="2025-02-13T15:29:14.613656464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.613800 containerd[1484]: time="2025-02-13T15:29:14.613726509Z" level=info msg="RemovePodSandbox \"0860313509079ae74a3c7a688016ef56f6505e000a086083cbf346677ce8aac4\" returns successfully" Feb 13 15:29:14.614332 containerd[1484]: time="2025-02-13T15:29:14.614112214Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:29:14.614332 containerd[1484]: time="2025-02-13T15:29:14.614234302Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:29:14.614332 containerd[1484]: time="2025-02-13T15:29:14.614249543Z" level=info msg="StopPodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:29:14.614648 containerd[1484]: time="2025-02-13T15:29:14.614560883Z" level=info msg="RemovePodSandbox for \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:29:14.614648 containerd[1484]: time="2025-02-13T15:29:14.614596126Z" level=info msg="Forcibly stopping sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\"" Feb 13 15:29:14.614776 containerd[1484]: time="2025-02-13T15:29:14.614676491Z" level=info msg="TearDown network for sandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" successfully" Feb 13 15:29:14.618259 containerd[1484]: time="2025-02-13T15:29:14.618195882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.618438 containerd[1484]: time="2025-02-13T15:29:14.618291008Z" level=info msg="RemovePodSandbox \"54589bd2b10e149367440a33f5bf334a76f6bcaeb760a81ddde63f046abd7343\" returns successfully" Feb 13 15:29:14.619048 containerd[1484]: time="2025-02-13T15:29:14.618790081Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:29:14.619048 containerd[1484]: time="2025-02-13T15:29:14.618883367Z" level=info msg="TearDown network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" successfully" Feb 13 15:29:14.619048 containerd[1484]: time="2025-02-13T15:29:14.618893288Z" level=info msg="StopPodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" returns successfully" Feb 13 15:29:14.619365 containerd[1484]: time="2025-02-13T15:29:14.619334557Z" level=info msg="RemovePodSandbox for \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:29:14.620480 containerd[1484]: time="2025-02-13T15:29:14.619410602Z" level=info msg="Forcibly stopping sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\"" Feb 13 15:29:14.620480 containerd[1484]: time="2025-02-13T15:29:14.619480567Z" level=info msg="TearDown network for sandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" successfully" Feb 13 15:29:14.622854 containerd[1484]: time="2025-02-13T15:29:14.622816666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.622921 containerd[1484]: time="2025-02-13T15:29:14.622883470Z" level=info msg="RemovePodSandbox \"d34945f320b21a70b86fcd3c6c64b6daac77d00f24ea90d2a5ccc05543d62492\" returns successfully" Feb 13 15:29:14.623292 containerd[1484]: time="2025-02-13T15:29:14.623227813Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" Feb 13 15:29:14.623469 containerd[1484]: time="2025-02-13T15:29:14.623451507Z" level=info msg="TearDown network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" successfully" Feb 13 15:29:14.623609 containerd[1484]: time="2025-02-13T15:29:14.623543194Z" level=info msg="StopPodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" returns successfully" Feb 13 15:29:14.623984 containerd[1484]: time="2025-02-13T15:29:14.623960381Z" level=info msg="RemovePodSandbox for \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" Feb 13 15:29:14.624040 containerd[1484]: time="2025-02-13T15:29:14.623992383Z" level=info msg="Forcibly stopping sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\"" Feb 13 15:29:14.624070 containerd[1484]: time="2025-02-13T15:29:14.624052867Z" level=info msg="TearDown network for sandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" successfully" Feb 13 15:29:14.627359 containerd[1484]: time="2025-02-13T15:29:14.627285919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.627441 containerd[1484]: time="2025-02-13T15:29:14.627377205Z" level=info msg="RemovePodSandbox \"d0d0e7be1b7dc238fd0f8a367573433245c312fa736ab65d067df25e474695b7\" returns successfully" Feb 13 15:29:14.627902 containerd[1484]: time="2025-02-13T15:29:14.627804953Z" level=info msg="StopPodSandbox for \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\"" Feb 13 15:29:14.627902 containerd[1484]: time="2025-02-13T15:29:14.627893839Z" level=info msg="TearDown network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" successfully" Feb 13 15:29:14.628009 containerd[1484]: time="2025-02-13T15:29:14.627905400Z" level=info msg="StopPodSandbox for \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" returns successfully" Feb 13 15:29:14.628640 containerd[1484]: time="2025-02-13T15:29:14.628431875Z" level=info msg="RemovePodSandbox for \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\"" Feb 13 15:29:14.628806 containerd[1484]: time="2025-02-13T15:29:14.628462837Z" level=info msg="Forcibly stopping sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\"" Feb 13 15:29:14.628995 containerd[1484]: time="2025-02-13T15:29:14.628915946Z" level=info msg="TearDown network for sandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" successfully" Feb 13 15:29:14.633076 containerd[1484]: time="2025-02-13T15:29:14.633024696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.633726 containerd[1484]: time="2025-02-13T15:29:14.633319516Z" level=info msg="RemovePodSandbox \"a8ae6890a2a62c24bfc3473394eef7c6bc9652c50289bfd24171ce381fde3c9b\" returns successfully" Feb 13 15:29:14.633895 containerd[1484]: time="2025-02-13T15:29:14.633859991Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:29:14.634025 containerd[1484]: time="2025-02-13T15:29:14.633979719Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:29:14.634025 containerd[1484]: time="2025-02-13T15:29:14.633997840Z" level=info msg="StopPodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:29:14.634607 containerd[1484]: time="2025-02-13T15:29:14.634579639Z" level=info msg="RemovePodSandbox for \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:29:14.634607 containerd[1484]: time="2025-02-13T15:29:14.634608720Z" level=info msg="Forcibly stopping sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\"" Feb 13 15:29:14.634743 containerd[1484]: time="2025-02-13T15:29:14.634677725Z" level=info msg="TearDown network for sandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" successfully" Feb 13 15:29:14.641460 containerd[1484]: time="2025-02-13T15:29:14.641416568Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.642173 containerd[1484]: time="2025-02-13T15:29:14.641914480Z" level=info msg="RemovePodSandbox \"114d1ce8d009b3c00222113b80959c6739b4d3eba17ef29319a8ea9f58ea3696\" returns successfully" Feb 13 15:29:14.642460 containerd[1484]: time="2025-02-13T15:29:14.642248262Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:29:14.642460 containerd[1484]: time="2025-02-13T15:29:14.642378671Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:29:14.642460 containerd[1484]: time="2025-02-13T15:29:14.642389992Z" level=info msg="StopPodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:29:14.643218 containerd[1484]: time="2025-02-13T15:29:14.642940668Z" level=info msg="RemovePodSandbox for \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:29:14.643218 containerd[1484]: time="2025-02-13T15:29:14.642967870Z" level=info msg="Forcibly stopping sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\"" Feb 13 15:29:14.643218 containerd[1484]: time="2025-02-13T15:29:14.643057636Z" level=info msg="TearDown network for sandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" successfully" Feb 13 15:29:14.646535 containerd[1484]: time="2025-02-13T15:29:14.646484461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.646615 containerd[1484]: time="2025-02-13T15:29:14.646552625Z" level=info msg="RemovePodSandbox \"41b4088674839c473c129f769f1ea6a2a65b82415f71f6133b6a483e70096d3c\" returns successfully" Feb 13 15:29:14.647323 containerd[1484]: time="2025-02-13T15:29:14.647061979Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:29:14.647323 containerd[1484]: time="2025-02-13T15:29:14.647179946Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:29:14.647323 containerd[1484]: time="2025-02-13T15:29:14.647193987Z" level=info msg="StopPodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:29:14.648084 containerd[1484]: time="2025-02-13T15:29:14.647890313Z" level=info msg="RemovePodSandbox for \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:29:14.648084 containerd[1484]: time="2025-02-13T15:29:14.648002720Z" level=info msg="Forcibly stopping sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\"" Feb 13 15:29:14.649305 containerd[1484]: time="2025-02-13T15:29:14.648458510Z" level=info msg="TearDown network for sandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" successfully" Feb 13 15:29:14.652667 containerd[1484]: time="2025-02-13T15:29:14.652630344Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.652845 containerd[1484]: time="2025-02-13T15:29:14.652825197Z" level=info msg="RemovePodSandbox \"ac801046c84630d8772d6fae236ba6239787842669d98efc24ade33836735b0f\" returns successfully" Feb 13 15:29:14.653435 containerd[1484]: time="2025-02-13T15:29:14.653341151Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:29:14.653559 containerd[1484]: time="2025-02-13T15:29:14.653491081Z" level=info msg="TearDown network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" successfully" Feb 13 15:29:14.653559 containerd[1484]: time="2025-02-13T15:29:14.653507882Z" level=info msg="StopPodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" returns successfully" Feb 13 15:29:14.654251 containerd[1484]: time="2025-02-13T15:29:14.654205128Z" level=info msg="RemovePodSandbox for \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:29:14.654469 containerd[1484]: time="2025-02-13T15:29:14.654263412Z" level=info msg="Forcibly stopping sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\"" Feb 13 15:29:14.654469 containerd[1484]: time="2025-02-13T15:29:14.654447024Z" level=info msg="TearDown network for sandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" successfully" Feb 13 15:29:14.658923 containerd[1484]: time="2025-02-13T15:29:14.658883355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.659027 containerd[1484]: time="2025-02-13T15:29:14.658952400Z" level=info msg="RemovePodSandbox \"173276624f3d3a2c76a43280c04b0be6421f906381864d80bbf90294bc04bc7a\" returns successfully" Feb 13 15:29:14.659764 containerd[1484]: time="2025-02-13T15:29:14.659442712Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" Feb 13 15:29:14.659764 containerd[1484]: time="2025-02-13T15:29:14.659545399Z" level=info msg="TearDown network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" successfully" Feb 13 15:29:14.659764 containerd[1484]: time="2025-02-13T15:29:14.659555639Z" level=info msg="StopPodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" returns successfully" Feb 13 15:29:14.660370 containerd[1484]: time="2025-02-13T15:29:14.660312369Z" level=info msg="RemovePodSandbox for \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" Feb 13 15:29:14.660370 containerd[1484]: time="2025-02-13T15:29:14.660338491Z" level=info msg="Forcibly stopping sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\"" Feb 13 15:29:14.660599 containerd[1484]: time="2025-02-13T15:29:14.660518823Z" level=info msg="TearDown network for sandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" successfully" Feb 13 15:29:14.668812 containerd[1484]: time="2025-02-13T15:29:14.668762964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:29:14.669145 containerd[1484]: time="2025-02-13T15:29:14.668838529Z" level=info msg="RemovePodSandbox \"1880384c0f1e80af394d0ae17dafc2c33ac56554b5c28736cb2114c043ad2cb8\" returns successfully" Feb 13 15:29:14.669540 containerd[1484]: time="2025-02-13T15:29:14.669479211Z" level=info msg="StopPodSandbox for \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\"" Feb 13 15:29:14.669672 containerd[1484]: time="2025-02-13T15:29:14.669606780Z" level=info msg="TearDown network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" successfully" Feb 13 15:29:14.669672 containerd[1484]: time="2025-02-13T15:29:14.669619861Z" level=info msg="StopPodSandbox for \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" returns successfully" Feb 13 15:29:14.670296 containerd[1484]: time="2025-02-13T15:29:14.670189578Z" level=info msg="RemovePodSandbox for \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\"" Feb 13 15:29:14.670426 containerd[1484]: time="2025-02-13T15:29:14.670259463Z" level=info msg="Forcibly stopping sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\"" Feb 13 15:29:14.670521 containerd[1484]: time="2025-02-13T15:29:14.670504439Z" level=info msg="TearDown network for sandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" successfully" Feb 13 15:29:14.674519 containerd[1484]: time="2025-02-13T15:29:14.674479580Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:29:14.674785 containerd[1484]: time="2025-02-13T15:29:14.674675873Z" level=info msg="RemovePodSandbox \"0bbb23d80736bd7a5f5b5ac9a3456b4d2daf706f7331ecac4c64e75ae685dac7\" returns successfully" Feb 13 15:29:41.906203 kubelet[2829]: I0213 15:29:41.905849 2829 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:31:53.684431 systemd[1]: run-containerd-runc-k8s.io-6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315-runc.ftSgaN.mount: Deactivated successfully. Feb 13 15:33:06.142690 systemd[1]: Started sshd@7-188.245.200.94:22-139.178.89.65:40398.service - OpenSSH per-connection server daemon (139.178.89.65:40398). Feb 13 15:33:07.140342 sshd[6256]: Accepted publickey for core from 139.178.89.65 port 40398 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:07.142420 sshd-session[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:07.151513 systemd-logind[1461]: New session 8 of user core. Feb 13 15:33:07.154471 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:33:07.914257 sshd[6258]: Connection closed by 139.178.89.65 port 40398 Feb 13 15:33:07.915167 sshd-session[6256]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:07.920966 systemd[1]: sshd@7-188.245.200.94:22-139.178.89.65:40398.service: Deactivated successfully. Feb 13 15:33:07.924215 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:33:07.925448 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:33:07.926896 systemd-logind[1461]: Removed session 8. Feb 13 15:33:13.095883 systemd[1]: Started sshd@8-188.245.200.94:22-139.178.89.65:40410.service - OpenSSH per-connection server daemon (139.178.89.65:40410). 
Feb 13 15:33:14.096368 sshd[6289]: Accepted publickey for core from 139.178.89.65 port 40410 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:14.098673 sshd-session[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:14.109065 systemd-logind[1461]: New session 9 of user core. Feb 13 15:33:14.114623 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:33:14.871385 sshd[6291]: Connection closed by 139.178.89.65 port 40410 Feb 13 15:33:14.872009 sshd-session[6289]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:14.876894 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:33:14.877172 systemd[1]: sshd@8-188.245.200.94:22-139.178.89.65:40410.service: Deactivated successfully. Feb 13 15:33:14.881153 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:33:14.882917 systemd-logind[1461]: Removed session 9. Feb 13 15:33:20.047738 systemd[1]: Started sshd@9-188.245.200.94:22-139.178.89.65:41102.service - OpenSSH per-connection server daemon (139.178.89.65:41102). Feb 13 15:33:21.025204 sshd[6326]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:21.027464 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:21.036541 systemd-logind[1461]: New session 10 of user core. Feb 13 15:33:21.041537 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:33:21.779956 sshd[6328]: Connection closed by 139.178.89.65 port 41102 Feb 13 15:33:21.780883 sshd-session[6326]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:21.787047 systemd[1]: sshd@9-188.245.200.94:22-139.178.89.65:41102.service: Deactivated successfully. Feb 13 15:33:21.790416 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:33:21.792688 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:33:21.794026 systemd-logind[1461]: Removed session 10. Feb 13 15:33:21.955036 systemd[1]: Started sshd@10-188.245.200.94:22-139.178.89.65:41106.service - OpenSSH per-connection server daemon (139.178.89.65:41106). Feb 13 15:33:22.935209 sshd[6340]: Accepted publickey for core from 139.178.89.65 port 41106 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:22.936753 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:22.941942 systemd-logind[1461]: New session 11 of user core. Feb 13 15:33:22.947581 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 15:33:23.690035 systemd[1]: run-containerd-runc-k8s.io-6b9df4424472527ed3c22aed72caae48c6b3aa593978cdba399deb71f54e0315-runc.fmHgko.mount: Deactivated successfully. Feb 13 15:33:23.747309 sshd[6346]: Connection closed by 139.178.89.65 port 41106 Feb 13 15:33:23.747529 sshd-session[6340]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:23.755257 systemd[1]: sshd@10-188.245.200.94:22-139.178.89.65:41106.service: Deactivated successfully. Feb 13 15:33:23.758099 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:33:23.765617 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:33:23.767515 systemd-logind[1461]: Removed session 11. Feb 13 15:33:23.925734 systemd[1]: Started sshd@11-188.245.200.94:22-139.178.89.65:41110.service - OpenSSH per-connection server daemon (139.178.89.65:41110). 
Feb 13 15:33:24.921320 sshd[6376]: Accepted publickey for core from 139.178.89.65 port 41110 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:24.924513 sshd-session[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:24.930192 systemd-logind[1461]: New session 12 of user core. Feb 13 15:33:24.935545 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:33:25.681478 sshd[6378]: Connection closed by 139.178.89.65 port 41110 Feb 13 15:33:25.682650 sshd-session[6376]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:25.687065 systemd[1]: sshd@11-188.245.200.94:22-139.178.89.65:41110.service: Deactivated successfully. Feb 13 15:33:25.688998 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:33:25.690976 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:33:25.692912 systemd-logind[1461]: Removed session 12. Feb 13 15:33:30.853618 systemd[1]: Started sshd@12-188.245.200.94:22-139.178.89.65:43702.service - OpenSSH per-connection server daemon (139.178.89.65:43702). Feb 13 15:33:31.837432 sshd[6390]: Accepted publickey for core from 139.178.89.65 port 43702 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:31.839594 sshd-session[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:31.844587 systemd-logind[1461]: New session 13 of user core. Feb 13 15:33:31.850648 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:33:32.588363 sshd[6392]: Connection closed by 139.178.89.65 port 43702 Feb 13 15:33:32.589235 sshd-session[6390]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:32.594591 systemd[1]: sshd@12-188.245.200.94:22-139.178.89.65:43702.service: Deactivated successfully. Feb 13 15:33:32.597140 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:33:32.600510 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:33:32.601818 systemd-logind[1461]: Removed session 13. Feb 13 15:33:32.766599 systemd[1]: Started sshd@13-188.245.200.94:22-139.178.89.65:43710.service - OpenSSH per-connection server daemon (139.178.89.65:43710). Feb 13 15:33:33.751779 sshd[6405]: Accepted publickey for core from 139.178.89.65 port 43710 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:33.753955 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:33.760143 systemd-logind[1461]: New session 14 of user core. Feb 13 15:33:33.764502 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:33:34.658160 sshd[6407]: Connection closed by 139.178.89.65 port 43710 Feb 13 15:33:34.659787 sshd-session[6405]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:34.666814 systemd[1]: sshd@13-188.245.200.94:22-139.178.89.65:43710.service: Deactivated successfully. Feb 13 15:33:34.671244 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 15:33:34.672587 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:33:34.674897 systemd-logind[1461]: Removed session 14. Feb 13 15:33:34.833833 systemd[1]: Started sshd@14-188.245.200.94:22-139.178.89.65:43202.service - OpenSSH per-connection server daemon (139.178.89.65:43202). 
Feb 13 15:33:35.821723 sshd[6416]: Accepted publickey for core from 139.178.89.65 port 43202 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:35.823733 sshd-session[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:35.829792 systemd-logind[1461]: New session 15 of user core. Feb 13 15:33:35.835514 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:33:38.584335 sshd[6418]: Connection closed by 139.178.89.65 port 43202 Feb 13 15:33:38.587019 sshd-session[6416]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:38.591873 systemd[1]: sshd@14-188.245.200.94:22-139.178.89.65:43202.service: Deactivated successfully. Feb 13 15:33:38.595155 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:33:38.597487 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:33:38.599200 systemd-logind[1461]: Removed session 15. Feb 13 15:33:38.761715 systemd[1]: Started sshd@15-188.245.200.94:22-139.178.89.65:43212.service - OpenSSH per-connection server daemon (139.178.89.65:43212). Feb 13 15:33:39.759315 sshd[6452]: Accepted publickey for core from 139.178.89.65 port 43212 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:39.764425 sshd-session[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:39.775795 systemd-logind[1461]: New session 16 of user core. Feb 13 15:33:39.779497 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 15:33:40.665793 sshd[6454]: Connection closed by 139.178.89.65 port 43212 Feb 13 15:33:40.668358 sshd-session[6452]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:40.672791 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:33:40.673112 systemd[1]: sshd@15-188.245.200.94:22-139.178.89.65:43212.service: Deactivated successfully. Feb 13 15:33:40.676054 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:33:40.682936 systemd-logind[1461]: Removed session 16. Feb 13 15:33:40.843643 systemd[1]: Started sshd@16-188.245.200.94:22-139.178.89.65:43214.service - OpenSSH per-connection server daemon (139.178.89.65:43214). Feb 13 15:33:41.827340 sshd[6464]: Accepted publickey for core from 139.178.89.65 port 43214 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk Feb 13 15:33:41.828899 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:33:41.835376 systemd-logind[1461]: New session 17 of user core. Feb 13 15:33:41.843238 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:33:42.496409 systemd[1]: Started sshd@17-188.245.200.94:22-79.104.0.82:35836.service - OpenSSH per-connection server daemon (79.104.0.82:35836). Feb 13 15:33:42.586980 sshd[6466]: Connection closed by 139.178.89.65 port 43214 Feb 13 15:33:42.587901 sshd-session[6464]: pam_unix(sshd:session): session closed for user core Feb 13 15:33:42.593889 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:33:42.594032 systemd[1]: sshd@16-188.245.200.94:22-139.178.89.65:43214.service: Deactivated successfully. Feb 13 15:33:42.597862 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:33:42.600530 systemd-logind[1461]: Removed session 17. 
Feb 13 15:33:42.863080 sshd[6474]: Invalid user dbuser from 79.104.0.82 port 35836
Feb 13 15:33:42.928505 sshd[6474]: Received disconnect from 79.104.0.82 port 35836:11: Bye Bye [preauth]
Feb 13 15:33:42.928505 sshd[6474]: Disconnected from invalid user dbuser 79.104.0.82 port 35836 [preauth]
Feb 13 15:33:42.931188 systemd[1]: sshd@17-188.245.200.94:22-79.104.0.82:35836.service: Deactivated successfully.
Feb 13 15:33:47.765714 systemd[1]: Started sshd@18-188.245.200.94:22-139.178.89.65:34986.service - OpenSSH per-connection server daemon (139.178.89.65:34986).
Feb 13 15:33:48.751648 sshd[6504]: Accepted publickey for core from 139.178.89.65 port 34986 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:33:48.755053 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:33:48.760598 systemd-logind[1461]: New session 18 of user core.
Feb 13 15:33:48.763495 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:33:49.503828 sshd[6506]: Connection closed by 139.178.89.65 port 34986
Feb 13 15:33:49.504841 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Feb 13 15:33:49.511410 systemd[1]: sshd@18-188.245.200.94:22-139.178.89.65:34986.service: Deactivated successfully.
Feb 13 15:33:49.514044 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:33:49.515931 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:33:49.517007 systemd-logind[1461]: Removed session 18.
Feb 13 15:33:54.683018 systemd[1]: Started sshd@19-188.245.200.94:22-139.178.89.65:34992.service - OpenSSH per-connection server daemon (139.178.89.65:34992).
Feb 13 15:33:55.668947 sshd[6538]: Accepted publickey for core from 139.178.89.65 port 34992 ssh2: RSA SHA256:gOK0CXwz1tCyASFR4SaHsklqtVIpFWLnYpEIAs0TKRk
Feb 13 15:33:55.671051 sshd-session[6538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:33:55.678481 systemd-logind[1461]: New session 19 of user core.
Feb 13 15:33:55.683432 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:33:56.432824 sshd[6540]: Connection closed by 139.178.89.65 port 34992
Feb 13 15:33:56.434007 sshd-session[6538]: pam_unix(sshd:session): session closed for user core
Feb 13 15:33:56.437553 systemd[1]: sshd@19-188.245.200.94:22-139.178.89.65:34992.service: Deactivated successfully.
Feb 13 15:33:56.439470 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:33:56.441072 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:33:56.442825 systemd-logind[1461]: Removed session 19.
Feb 13 15:34:11.377113 systemd[1]: cri-containerd-3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774.scope: Deactivated successfully.
Feb 13 15:34:11.378486 systemd[1]: cri-containerd-3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774.scope: Consumed 6.224s CPU time, 20.3M memory peak, 0B memory swap peak.
Feb 13 15:34:11.405380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774-rootfs.mount: Deactivated successfully.
Feb 13 15:34:11.406019 containerd[1484]: time="2025-02-13T15:34:11.405961664Z" level=info msg="shim disconnected" id=3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774 namespace=k8s.io
Feb 13 15:34:11.406019 containerd[1484]: time="2025-02-13T15:34:11.406016262Z" level=warning msg="cleaning up after shim disconnected" id=3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774 namespace=k8s.io
Feb 13 15:34:11.406019 containerd[1484]: time="2025-02-13T15:34:11.406023942Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:11.588981 systemd[1]: cri-containerd-171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294.scope: Deactivated successfully.
Feb 13 15:34:11.591584 kubelet[2829]: E0213 15:34:11.590549 2829 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34324->10.0.0.2:2379: read: connection timed out"
Feb 13 15:34:11.589253 systemd[1]: cri-containerd-171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294.scope: Consumed 2.249s CPU time, 16.1M memory peak, 0B memory swap peak.
Feb 13 15:34:11.620115 containerd[1484]: time="2025-02-13T15:34:11.619908098Z" level=info msg="shim disconnected" id=171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294 namespace=k8s.io
Feb 13 15:34:11.620115 containerd[1484]: time="2025-02-13T15:34:11.619966376Z" level=warning msg="cleaning up after shim disconnected" id=171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294 namespace=k8s.io
Feb 13 15:34:11.620115 containerd[1484]: time="2025-02-13T15:34:11.619976255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:11.623502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294-rootfs.mount: Deactivated successfully.
Feb 13 15:34:12.179632 kubelet[2829]: I0213 15:34:12.179160 2829 scope.go:117] "RemoveContainer" containerID="171a8fa965a64046c7c91379db0df94a942767041277b0a94289d6215ddf1294"
Feb 13 15:34:12.179632 kubelet[2829]: I0213 15:34:12.179255 2829 scope.go:117] "RemoveContainer" containerID="3d20fb4aee0cfddab6cbc4c23604d128ec7a95ad773f57db4d73293311242774"
Feb 13 15:34:12.182868 containerd[1484]: time="2025-02-13T15:34:12.182831982Z" level=info msg="CreateContainer within sandbox \"4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 13 15:34:12.185145 containerd[1484]: time="2025-02-13T15:34:12.184853833Z" level=info msg="CreateContainer within sandbox \"aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 13 15:34:12.206553 containerd[1484]: time="2025-02-13T15:34:12.206502301Z" level=info msg="CreateContainer within sandbox \"4fccf78622aed89abf8ca9ee58d391c55348c756e96b21ea21fb67ab501e769c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"04658eb8545ba7dc90ab6938ee2dea9dd5b5022a8ba08bf3ffea161fa206c972\""
Feb 13 15:34:12.207081 containerd[1484]: time="2025-02-13T15:34:12.207052522Z" level=info msg="StartContainer for \"04658eb8545ba7dc90ab6938ee2dea9dd5b5022a8ba08bf3ffea161fa206c972\""
Feb 13 15:34:12.210863 containerd[1484]: time="2025-02-13T15:34:12.209933265Z" level=info msg="CreateContainer within sandbox \"aa629ccbb098ed0fb79f6e75c65194ce1a5d862d397ead819d41c3059151ce98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2f1bc22e66e751ecb06f5de4c525d0457b2ef6d4f60441139465d79f6fb81edc\""
Feb 13 15:34:12.211388 containerd[1484]: time="2025-02-13T15:34:12.211364656Z" level=info msg="StartContainer for \"2f1bc22e66e751ecb06f5de4c525d0457b2ef6d4f60441139465d79f6fb81edc\""
Feb 13 15:34:12.238477 systemd[1]: Started cri-containerd-04658eb8545ba7dc90ab6938ee2dea9dd5b5022a8ba08bf3ffea161fa206c972.scope - libcontainer container 04658eb8545ba7dc90ab6938ee2dea9dd5b5022a8ba08bf3ffea161fa206c972.
Feb 13 15:34:12.246811 systemd[1]: Started cri-containerd-2f1bc22e66e751ecb06f5de4c525d0457b2ef6d4f60441139465d79f6fb81edc.scope - libcontainer container 2f1bc22e66e751ecb06f5de4c525d0457b2ef6d4f60441139465d79f6fb81edc.
Feb 13 15:34:12.300068 containerd[1484]: time="2025-02-13T15:34:12.299947458Z" level=info msg="StartContainer for \"2f1bc22e66e751ecb06f5de4c525d0457b2ef6d4f60441139465d79f6fb81edc\" returns successfully"
Feb 13 15:34:12.303716 containerd[1484]: time="2025-02-13T15:34:12.303475778Z" level=info msg="StartContainer for \"04658eb8545ba7dc90ab6938ee2dea9dd5b5022a8ba08bf3ffea161fa206c972\" returns successfully"
Feb 13 15:34:12.344663 systemd[1]: cri-containerd-5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8.scope: Deactivated successfully.
Feb 13 15:34:12.346415 systemd[1]: cri-containerd-5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8.scope: Consumed 7.039s CPU time.
Feb 13 15:34:12.385410 containerd[1484]: time="2025-02-13T15:34:12.385333008Z" level=info msg="shim disconnected" id=5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8 namespace=k8s.io
Feb 13 15:34:12.385819 containerd[1484]: time="2025-02-13T15:34:12.385597199Z" level=warning msg="cleaning up after shim disconnected" id=5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8 namespace=k8s.io
Feb 13 15:34:12.385819 containerd[1484]: time="2025-02-13T15:34:12.385616558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:12.410698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8-rootfs.mount: Deactivated successfully.
Feb 13 15:34:13.188577 kubelet[2829]: I0213 15:34:13.185967 2829 scope.go:117] "RemoveContainer" containerID="5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8"
Feb 13 15:34:13.190670 containerd[1484]: time="2025-02-13T15:34:13.190459867Z" level=info msg="CreateContainer within sandbox \"f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 13 15:34:13.211351 containerd[1484]: time="2025-02-13T15:34:13.207874848Z" level=info msg="CreateContainer within sandbox \"f9d93d9cf134b4d5909feacd60a4a6c4195695a2d8a6f4b5c8faf879cb49305d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998\""
Feb 13 15:34:13.211351 containerd[1484]: time="2025-02-13T15:34:13.208881534Z" level=info msg="StartContainer for \"3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998\""
Feb 13 15:34:13.254592 systemd[1]: Started cri-containerd-3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998.scope - libcontainer container 3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998.
Feb 13 15:34:13.570337 containerd[1484]: time="2025-02-13T15:34:13.569937963Z" level=info msg="StartContainer for \"3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998\" returns successfully"
Feb 13 15:34:15.752642 kubelet[2829]: E0213 15:34:15.752588 2829 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:34106->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4152-2-1-1-73ff0440f7.1823ce6c39447a63 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4152-2-1-1-73ff0440f7,UID:d8f1926d9a7b1f00897cc97fdec61a27,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-1-73ff0440f7,},FirstTimestamp:2025-02-13 15:34:05.307722339 +0000 UTC m=+350.961345810,LastTimestamp:2025-02-13 15:34:05.307722339 +0000 UTC m=+350.961345810,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-1-73ff0440f7,}"
Feb 13 15:34:17.032097 systemd[1]: cri-containerd-3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998.scope: Deactivated successfully.
Feb 13 15:34:17.059224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998-rootfs.mount: Deactivated successfully.
Feb 13 15:34:17.062260 containerd[1484]: time="2025-02-13T15:34:17.062198642Z" level=info msg="shim disconnected" id=3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998 namespace=k8s.io
Feb 13 15:34:17.062260 containerd[1484]: time="2025-02-13T15:34:17.062256920Z" level=warning msg="cleaning up after shim disconnected" id=3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998 namespace=k8s.io
Feb 13 15:34:17.062935 containerd[1484]: time="2025-02-13T15:34:17.062264560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:34:17.075912 containerd[1484]: time="2025-02-13T15:34:17.075850139Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:34:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:34:17.206190 kubelet[2829]: I0213 15:34:17.206122 2829 scope.go:117] "RemoveContainer" containerID="5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8"
Feb 13 15:34:17.206654 kubelet[2829]: I0213 15:34:17.206291 2829 scope.go:117] "RemoveContainer" containerID="3c372176a79afd4095f38966a1c7a15fcad2171cf6342f24627d45ef139cb998"
Feb 13 15:34:17.206687 kubelet[2829]: E0213 15:34:17.206650 2829 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-c7ccbd65-4dlcr_tigera-operator(1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3)\"" pod="tigera-operator/tigera-operator-c7ccbd65-4dlcr" podUID="1c4dfc8c-6fae-40a6-8bc8-73f1ef2b24f3"
Feb 13 15:34:17.208539 containerd[1484]: time="2025-02-13T15:34:17.208051642Z" level=info msg="RemoveContainer for \"5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8\""
Feb 13 15:34:17.212603 containerd[1484]: time="2025-02-13T15:34:17.212432226Z" level=info msg="RemoveContainer for \"5307216d8172ccaf73e5cac05461f93ffa0223ad79dfb7d02236537b289f31f8\" returns successfully"