Jan 29 16:20:50.935741 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 16:20:50.935771 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jan 29 14:53:00 -00 2025 Jan 29 16:20:50.935784 kernel: KASLR enabled Jan 29 16:20:50.935791 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 29 16:20:50.935798 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 29 16:20:50.935804 kernel: random: crng init done Jan 29 16:20:50.935839 kernel: secureboot: Secure boot disabled Jan 29 16:20:50.935847 kernel: ACPI: Early table checksum verification disabled Jan 29 16:20:50.935855 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 29 16:20:50.935865 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 29 16:20:50.935872 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935878 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935884 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935890 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935898 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935906 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935913 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935919 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935929 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 16:20:50.935935 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 29 16:20:50.935943 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 29 16:20:50.935951 kernel: NUMA: Failed to initialise from firmware Jan 29 16:20:50.935958 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 29 16:20:50.935965 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff] Jan 29 16:20:50.935971 kernel: Zone ranges: Jan 29 16:20:50.935980 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 29 16:20:50.935989 kernel: DMA32 empty Jan 29 16:20:50.935995 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 29 16:20:50.936003 kernel: Movable zone start for each node Jan 29 16:20:50.936010 kernel: Early memory node ranges Jan 29 16:20:50.936018 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 29 16:20:50.936025 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 29 16:20:50.936033 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 29 16:20:50.936041 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 29 16:20:50.936049 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 29 16:20:50.936056 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 29 16:20:50.936063 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 29 16:20:50.936073 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 29 16:20:50.936080 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Jan 29 16:20:50.936086 kernel: Initmem setup node 0 
[mem 0x0000000040000000-0x0000000139ffffff] Jan 29 16:20:50.936096 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 29 16:20:50.936103 kernel: psci: probing for conduit method from ACPI. Jan 29 16:20:50.936110 kernel: psci: PSCIv1.1 detected in firmware. Jan 29 16:20:50.936118 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 16:20:50.936125 kernel: psci: Trusted OS migration not required Jan 29 16:20:50.936131 kernel: psci: SMC Calling Convention v1.1 Jan 29 16:20:50.936140 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 16:20:50.936147 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 16:20:50.936156 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 16:20:50.936178 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 29 16:20:50.936185 kernel: Detected PIPT I-cache on CPU0 Jan 29 16:20:50.936192 kernel: CPU features: detected: GIC system register CPU interface Jan 29 16:20:50.936200 kernel: CPU features: detected: Hardware dirty bit management Jan 29 16:20:50.936211 kernel: CPU features: detected: Spectre-v4 Jan 29 16:20:50.936219 kernel: CPU features: detected: Spectre-BHB Jan 29 16:20:50.936226 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 16:20:50.936235 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 16:20:50.936243 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 16:20:50.936250 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 16:20:50.936258 kernel: alternatives: applying boot alternatives Jan 29 16:20:50.936266 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac Jan 29 16:20:50.936275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 16:20:50.936282 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 16:20:50.936290 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 16:20:50.936299 kernel: Fallback order for Node 0: 0 Jan 29 16:20:50.936306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Jan 29 16:20:50.936313 kernel: Policy zone: Normal Jan 29 16:20:50.936321 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 16:20:50.936328 kernel: software IO TLB: area num 2. Jan 29 16:20:50.936336 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Jan 29 16:20:50.936345 kernel: Memory: 3883896K/4096000K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 212104K reserved, 0K cma-reserved) Jan 29 16:20:50.936352 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 16:20:50.936359 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 16:20:50.936367 kernel: rcu: RCU event tracing is enabled. Jan 29 16:20:50.936373 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 16:20:50.936381 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 16:20:50.936389 kernel: Tracing variant of Tasks RCU enabled. 
Jan 29 16:20:50.936396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 16:20:50.936403 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 16:20:50.936410 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 16:20:50.936418 kernel: GICv3: 256 SPIs implemented Jan 29 16:20:50.936425 kernel: GICv3: 0 Extended SPIs implemented Jan 29 16:20:50.936433 kernel: Root IRQ handler: gic_handle_irq Jan 29 16:20:50.936440 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 16:20:50.936448 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 16:20:50.936456 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 16:20:50.936464 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 16:20:50.936474 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 16:20:50.936482 kernel: GICv3: using LPI property table @0x00000001000e0000 Jan 29 16:20:50.936490 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Jan 29 16:20:50.936498 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 16:20:50.936505 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 16:20:50.936512 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 16:20:50.936520 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 16:20:50.936528 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 16:20:50.936537 kernel: Console: colour dummy device 80x25 Jan 29 16:20:50.936545 kernel: ACPI: Core revision 20230628 Jan 29 16:20:50.936554 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 16:20:50.936564 kernel: pid_max: default: 32768 minimum: 301 Jan 29 16:20:50.936572 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 16:20:50.936580 kernel: landlock: Up and running. Jan 29 16:20:50.936586 kernel: SELinux: Initializing. Jan 29 16:20:50.936593 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:20:50.936601 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 16:20:50.936608 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:20:50.936615 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 16:20:50.936624 kernel: rcu: Hierarchical SRCU implementation. Jan 29 16:20:50.936634 kernel: rcu: Max phase no-delay instances is 400. Jan 29 16:20:50.936641 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 16:20:50.936648 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 16:20:50.936656 kernel: Remapping and enabling EFI services. Jan 29 16:20:50.936664 kernel: smp: Bringing up secondary CPUs ... 
Jan 29 16:20:50.936673 kernel: Detected PIPT I-cache on CPU1 Jan 29 16:20:50.936681 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 16:20:50.936689 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Jan 29 16:20:50.936697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 16:20:50.936707 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 16:20:50.936715 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 16:20:50.936732 kernel: SMP: Total of 2 processors activated. Jan 29 16:20:50.936742 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 16:20:50.936750 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 16:20:50.936770 kernel: CPU features: detected: Common not Private translations Jan 29 16:20:50.936779 kernel: CPU features: detected: CRC32 instructions Jan 29 16:20:50.940922 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 16:20:50.940937 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 16:20:50.940954 kernel: CPU features: detected: LSE atomic instructions Jan 29 16:20:50.940963 kernel: CPU features: detected: Privileged Access Never Jan 29 16:20:50.940970 kernel: CPU features: detected: RAS Extension Support Jan 29 16:20:50.940979 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 16:20:50.940988 kernel: CPU: All CPU(s) started at EL1 Jan 29 16:20:50.940996 kernel: alternatives: applying system-wide alternatives Jan 29 16:20:50.941005 kernel: devtmpfs: initialized Jan 29 16:20:50.941015 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 16:20:50.941025 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 16:20:50.941032 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 16:20:50.941041 kernel: SMBIOS 3.0.0 present. Jan 29 16:20:50.941049 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 29 16:20:50.941058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 16:20:50.941067 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 16:20:50.941074 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 16:20:50.941084 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 16:20:50.941092 kernel: audit: initializing netlink subsys (disabled) Jan 29 16:20:50.941102 kernel: audit: type=2000 audit(0.010:1): state=initialized audit_enabled=0 res=1 Jan 29 16:20:50.941112 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 16:20:50.941120 kernel: cpuidle: using governor menu Jan 29 16:20:50.941129 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 29 16:20:50.941137 kernel: ASID allocator initialised with 32768 entries Jan 29 16:20:50.941144 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 16:20:50.941154 kernel: Serial: AMBA PL011 UART driver Jan 29 16:20:50.941175 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 16:20:50.941186 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 16:20:50.941198 kernel: Modules: 509280 pages in range for PLT usage Jan 29 16:20:50.941206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 16:20:50.941214 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 16:20:50.941222 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 16:20:50.941231 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 16:20:50.941240 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 16:20:50.941248 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 16:20:50.941256 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 16:20:50.941263 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 16:20:50.941274 kernel: ACPI: Added _OSI(Module Device) Jan 29 16:20:50.941281 kernel: ACPI: Added _OSI(Processor Device) Jan 29 16:20:50.941290 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 16:20:50.941298 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 16:20:50.941307 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 16:20:50.941315 kernel: ACPI: Interpreter enabled Jan 29 16:20:50.941323 kernel: ACPI: Using GIC for interrupt routing Jan 29 16:20:50.941337 kernel: ACPI: MCFG table detected, 1 entries Jan 29 16:20:50.941344 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 16:20:50.941355 kernel: printk: console [ttyAMA0] enabled Jan 29 16:20:50.941362 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 16:20:50.941555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 16:20:50.941638 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 16:20:50.941716 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 16:20:50.941786 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 16:20:50.942975 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 16:20:50.943016 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 29 16:20:50.943024 kernel: PCI host bridge to bus 0000:00 Jan 29 16:20:50.943117 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 16:20:50.943215 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 16:20:50.943283 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 16:20:50.943344 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 16:20:50.943432 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 16:20:50.943518 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Jan 29 16:20:50.943597 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Jan 29 16:20:50.943671 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 16:20:50.943754 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.946895 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Jan 29 16:20:50.947095 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.947273 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Jan 29 16:20:50.947387 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.947477 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Jan 29 16:20:50.947581 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.947730 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Jan 29 16:20:50.947861 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.947966 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Jan 29 16:20:50.948061 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.948136 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Jan 29 16:20:50.948228 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.948302 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Jan 29 16:20:50.948378 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.948448 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Jan 29 16:20:50.948527 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jan 29 16:20:50.948599 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Jan 29 16:20:50.948683 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Jan 29 16:20:50.948754 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007] Jan 29 16:20:50.950005 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 16:20:50.950111 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Jan 29 16:20:50.950218 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 16:20:50.950292 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 16:20:50.950376 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 29 16:20:50.950451 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Jan 29 16:20:50.950533 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jan 29 16:20:50.950606 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Jan 29 16:20:50.950684 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Jan 29 16:20:50.950770 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jan 29 16:20:50.951023 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Jan 29 16:20:50.951115 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 29 16:20:50.951201 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff] Jan 29 16:20:50.951272 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Jan 29 16:20:50.951349 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jan 29 16:20:50.951432 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Jan 29 16:20:50.951503 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 16:20:50.951582 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jan 29 16:20:50.951655 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Jan 29 16:20:50.951727 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Jan 29 16:20:50.951798 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Jan 29 16:20:50.952001 kernel: pci 0000:00:02.0: bridge 
window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 29 16:20:50.952082 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 29 16:20:50.952149 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 29 16:20:50.952274 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 29 16:20:50.952346 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 29 16:20:50.952412 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 29 16:20:50.952486 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 29 16:20:50.952564 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 29 16:20:50.952629 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 29 16:20:50.952701 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 29 16:20:50.952768 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 29 16:20:50.952950 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 29 16:20:50.953028 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 29 16:20:50.953109 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 29 16:20:50.953193 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 29 16:20:50.953275 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 16:20:50.953339 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 29 16:20:50.953405 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 29 16:20:50.953475 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 16:20:50.953541 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 29 16:20:50.953607 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 29 16:20:50.953678 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 16:20:50.953748 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 29 16:20:50.953826 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 29 16:20:50.953902 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 16:20:50.953969 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 29 16:20:50.954040 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 29 16:20:50.954109 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 
0x10000000-0x101fffff] Jan 29 16:20:50.954223 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 16:20:50.954305 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Jan 29 16:20:50.954381 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 16:20:50.954451 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Jan 29 16:20:50.954527 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 16:20:50.954598 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Jan 29 16:20:50.954674 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 16:20:50.954747 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Jan 29 16:20:50.954834 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 16:20:50.954907 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Jan 29 16:20:50.954974 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 16:20:50.955065 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Jan 29 16:20:50.955137 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 16:20:50.955223 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Jan 29 16:20:50.955299 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 16:20:50.955374 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Jan 29 16:20:50.955444 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 16:20:50.955516 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Jan 29 16:20:50.955585 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Jan 29 16:20:50.955653 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Jan 29 16:20:50.955720 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 29 16:20:50.955789 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Jan 29 16:20:50.955923 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 29 16:20:50.956002 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Jan 29 16:20:50.956070 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 29 16:20:50.956139 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Jan 29 16:20:50.956250 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 29 16:20:50.956325 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Jan 29 16:20:50.956391 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 29 16:20:50.956459 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Jan 29 16:20:50.956525 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 29 16:20:50.956599 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Jan 29 16:20:50.956666 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 29 16:20:50.956790 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Jan 29 16:20:50.956895 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 29 16:20:50.956970 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Jan 29 16:20:50.957038 kernel: pci 0000:00:03.0: BAR 13: assigned [io 
0x9000-0x9fff] Jan 29 16:20:50.957111 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Jan 29 16:20:50.957205 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Jan 29 16:20:50.957309 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 16:20:50.957383 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Jan 29 16:20:50.957457 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 29 16:20:50.957526 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 29 16:20:50.957592 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 29 16:20:50.957662 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 16:20:50.957739 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Jan 29 16:20:50.957882 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 29 16:20:50.957964 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 29 16:20:50.958053 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 29 16:20:50.958122 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 16:20:50.958208 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Jan 29 16:20:50.958285 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Jan 29 16:20:50.958353 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 29 16:20:50.958432 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 29 16:20:50.958499 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 29 16:20:50.958565 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 16:20:50.958640 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Jan 29 16:20:50.958710 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 29 16:20:50.958785 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 29 16:20:50.958884 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 29 16:20:50.958953 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 16:20:50.959030 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Jan 29 16:20:50.959109 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff] Jan 29 16:20:50.959224 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 29 16:20:50.959307 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 29 16:20:50.959416 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 29 16:20:50.959490 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 16:20:50.959588 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Jan 29 16:20:50.959659 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Jan 29 16:20:50.959731 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 29 16:20:50.959796 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 29 16:20:50.959960 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 29 16:20:50.960030 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 16:20:50.960103 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Jan 29 16:20:50.960184 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Jan 29 16:20:50.960263 kernel: pci 
0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Jan 29 16:20:50.960331 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 29 16:20:50.960395 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 29 16:20:50.960459 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 29 16:20:50.960524 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 16:20:50.960592 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 29 16:20:50.960659 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 29 16:20:50.960730 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 29 16:20:50.960795 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 16:20:50.960945 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 29 16:20:50.961015 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 29 16:20:50.961079 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 29 16:20:50.961143 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 16:20:50.961232 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 16:20:50.961295 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 16:20:50.961359 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 16:20:50.961436 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 29 16:20:50.961498 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 29 16:20:50.961558 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 29 16:20:50.961626 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 29 16:20:50.961685 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 29 16:20:50.961748 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 29 16:20:50.961864 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 29 16:20:50.961955 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 29 16:20:50.962020 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 29 16:20:50.962094 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 29 16:20:50.962154 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 29 16:20:50.962263 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 29 16:20:50.962349 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 29 16:20:50.962435 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 29 16:20:50.962502 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 29 16:20:50.962571 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 29 16:20:50.962636 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 29 16:20:50.962696 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 29 16:20:50.962777 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 29 16:20:50.962905 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 29 16:20:50.962969 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 29 16:20:50.963041 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 29 16:20:50.963103 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 29 16:20:50.963181 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 29 16:20:50.963257 kernel: 
pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 29 16:20:50.963317 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 29 16:20:50.963390 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 29 16:20:50.963401 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 16:20:50.963409 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 16:20:50.963417 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 16:20:50.963425 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 16:20:50.963435 kernel: iommu: Default domain type: Translated Jan 29 16:20:50.963443 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 16:20:50.963451 kernel: efivars: Registered efivars operations Jan 29 16:20:50.963458 kernel: vgaarb: loaded Jan 29 16:20:50.963468 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 16:20:50.963476 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 16:20:50.963484 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 16:20:50.963491 kernel: pnp: PnP ACPI init Jan 29 16:20:50.963574 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 16:20:50.963589 kernel: pnp: PnP ACPI: found 1 devices Jan 29 16:20:50.963597 kernel: NET: Registered PF_INET protocol family Jan 29 16:20:50.963605 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 16:20:50.963613 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 16:20:50.963621 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 16:20:50.963629 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 16:20:50.963636 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 16:20:50.963644 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 16:20:50.963653 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:20:50.963661 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 16:20:50.963668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 16:20:50.963745 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 29 16:20:50.963756 kernel: PCI: CLS 0 bytes, default 64 Jan 29 16:20:50.963763 kernel: kvm [1]: HYP mode not available Jan 29 16:20:50.963771 kernel: Initialise system trusted keyrings Jan 29 16:20:50.963779 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 16:20:50.963786 kernel: Key type asymmetric registered Jan 29 16:20:50.963796 kernel: Asymmetric key parser 'x509' registered Jan 29 16:20:50.963803 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 16:20:50.963858 kernel: io scheduler mq-deadline registered Jan 29 16:20:50.963867 kernel: io scheduler kyber registered Jan 29 16:20:50.963874 kernel: io scheduler bfq registered Jan 29 16:20:50.963882 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 29 16:20:50.963965 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 29 16:20:50.964035 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 29 16:20:50.964103 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.964233 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 29 16:20:50.964310 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 51 Jan 29 16:20:50.964376 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.964454 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 29 16:20:50.964543 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 29 16:20:50.964622 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.964693 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 29 16:20:50.964765 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 29 16:20:50.966951 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.967059 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 29 16:20:50.967127 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 29 16:20:50.967258 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.967336 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 29 16:20:50.967404 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 29 16:20:50.967470 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.967538 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 29 16:20:50.967607 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 29 16:20:50.967680 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.967750 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 29 16:20:50.967855 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 29 16:20:50.967926 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.967938 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 29 16:20:50.968004 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 29 16:20:50.968074 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 29 16:20:50.968137 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 29 16:20:50.968148 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 16:20:50.968155 kernel: ACPI: button: Power Button [PWRB] Jan 29 16:20:50.968176 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 16:20:50.968259 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 29 16:20:50.968357 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 29 16:20:50.968370 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 16:20:50.968383 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 29 16:20:50.968465 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 29 16:20:50.968477 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 29 16:20:50.968484 kernel: thunder_xcv, ver 1.0 Jan 29 16:20:50.968492 kernel: thunder_bgx, ver 1.0 Jan 29 16:20:50.968499 kernel: nicpf, ver 1.0 Jan 29 16:20:50.968506 kernel: nicvf, ver 
1.0 Jan 29 16:20:50.968590 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 16:20:50.968655 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T16:20:50 UTC (1738167650) Jan 29 16:20:50.968668 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 16:20:50.968676 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 16:20:50.968683 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 16:20:50.968691 kernel: watchdog: Hard watchdog permanently disabled Jan 29 16:20:50.968698 kernel: NET: Registered PF_INET6 protocol family Jan 29 16:20:50.968706 kernel: Segment Routing with IPv6 Jan 29 16:20:50.968713 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 16:20:50.968720 kernel: NET: Registered PF_PACKET protocol family Jan 29 16:20:50.968730 kernel: Key type dns_resolver registered Jan 29 16:20:50.968737 kernel: registered taskstats version 1 Jan 29 16:20:50.968745 kernel: Loading compiled-in X.509 certificates Jan 29 16:20:50.968753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6aa2640fb67e4af9702410ddab8a5c8b9fc0d77b' Jan 29 16:20:50.968760 kernel: Key type .fscrypt registered Jan 29 16:20:50.968768 kernel: Key type fscrypt-provisioning registered Jan 29 16:20:50.968775 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 16:20:50.968782 kernel: ima: Allocated hash algorithm: sha1 Jan 29 16:20:50.968790 kernel: ima: No architecture policies found Jan 29 16:20:50.968799 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 16:20:50.968806 kernel: clk: Disabling unused clocks Jan 29 16:20:50.968860 kernel: Freeing unused kernel memory: 38336K Jan 29 16:20:50.968868 kernel: Run /init as init process Jan 29 16:20:50.968876 kernel: with arguments: Jan 29 16:20:50.968884 kernel: /init Jan 29 16:20:50.968891 kernel: with environment: Jan 29 16:20:50.968898 kernel: HOME=/ Jan 29 16:20:50.968905 kernel: TERM=linux Jan 29 16:20:50.968915 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 16:20:50.968924 systemd[1]: Successfully made /usr/ read-only. Jan 29 16:20:50.968936 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:20:50.968944 systemd[1]: Detected virtualization kvm. Jan 29 16:20:50.968952 systemd[1]: Detected architecture arm64. Jan 29 16:20:50.968959 systemd[1]: Running in initrd. Jan 29 16:20:50.968967 systemd[1]: No hostname configured, using default hostname. Jan 29 16:20:50.968977 systemd[1]: Hostname set to <localhost>. Jan 29 16:20:50.968985 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:20:50.968993 systemd[1]: Queued start job for default target initrd.target. Jan 29 16:20:50.969003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:50.969012 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:20:50.969021 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 16:20:50.969029 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 16:20:50.969037 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 16:20:50.969047 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 16:20:50.969059 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 16:20:50.969069 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 16:20:50.969078 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:50.969087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:20:50.969097 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:20:50.969104 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:20:50.969114 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:20:50.969122 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:20:50.969131 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:20:50.969139 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:20:50.969147 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 16:20:50.969155 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 29 16:20:50.969177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:50.969186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:50.969194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:50.969204 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:20:50.969212 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 16:20:50.969221 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:20:50.969229 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 16:20:50.969237 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 16:20:50.969245 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:20:50.969253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:20:50.969261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:50.969270 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 16:20:50.969279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:50.969287 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 16:20:50.969331 systemd-journald[238]: Collecting audit messages is disabled. Jan 29 16:20:50.969354 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 16:20:50.969363 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 16:20:50.969372 kernel: Bridge firewalling registered Jan 29 16:20:50.969379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:50.969388 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:50.969398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 16:20:50.969406 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 16:20:50.969415 systemd-journald[238]: Journal started Jan 29 16:20:50.969435 systemd-journald[238]: Runtime Journal (/run/log/journal/7612cf72a96e4da6831226e77bbd8207) is 8M, max 76.6M, 68.6M free. Jan 29 16:20:50.970987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:20:50.928061 systemd-modules-load[239]: Inserted module 'overlay' Jan 29 16:20:50.946895 systemd-modules-load[239]: Inserted module 'br_netfilter' Jan 29 16:20:50.977186 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:20:50.982892 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:20:50.990208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:50.992041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:51.002082 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 16:20:51.005244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:20:51.007262 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:20:51.022627 dracut-cmdline[270]: dracut-dracut-053 Jan 29 16:20:51.026797 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=efa7e6e1cc8b13b443d6366d9f999907439b0271fcbeecfeffa01ef11e4dc0ac Jan 29 16:20:51.030180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:20:51.041117 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:20:51.074074 systemd-resolved[292]: Positive Trust Anchors: Jan 29 16:20:51.075057 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:20:51.076088 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:20:51.088132 systemd-resolved[292]: Defaulting to hostname 'linux'. Jan 29 16:20:51.090932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:20:51.092973 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:20:51.114897 kernel: SCSI subsystem initialized Jan 29 16:20:51.119878 kernel: Loading iSCSI transport class v2.0-870. Jan 29 16:20:51.127900 kernel: iscsi: registered transport (tcp) Jan 29 16:20:51.146535 kernel: iscsi: registered transport (qla4xxx) Jan 29 16:20:51.146626 kernel: QLogic iSCSI HBA Driver Jan 29 16:20:51.197324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 29 16:20:51.203124 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 16:20:51.224958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 16:20:51.225028 kernel: device-mapper: uevent: version 1.0.3 Jan 29 16:20:51.225040 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 16:20:51.283893 kernel: raid6: neonx8 gen() 15124 MB/s Jan 29 16:20:51.302225 kernel: raid6: neonx4 gen() 15591 MB/s Jan 29 16:20:51.317882 kernel: raid6: neonx2 gen() 12987 MB/s Jan 29 16:20:51.334900 kernel: raid6: neonx1 gen() 10141 MB/s Jan 29 16:20:51.351882 kernel: raid6: int64x8 gen() 6666 MB/s Jan 29 16:20:51.368890 kernel: raid6: int64x4 gen() 7205 MB/s Jan 29 16:20:51.385879 kernel: raid6: int64x2 gen() 6002 MB/s Jan 29 16:20:51.402901 kernel: raid6: int64x1 gen() 4980 MB/s Jan 29 16:20:51.402990 kernel: raid6: using algorithm neonx4 gen() 15591 MB/s Jan 29 16:20:51.420761 kernel: raid6: .... xor() 12184 MB/s, rmw enabled Jan 29 16:20:51.420871 kernel: raid6: using neon recovery algorithm Jan 29 16:20:51.429185 kernel: xor: measuring software checksum speed Jan 29 16:20:51.429264 kernel: 8regs : 20562 MB/sec Jan 29 16:20:51.429289 kernel: 32regs : 21710 MB/sec Jan 29 16:20:51.429301 kernel: arm64_neon : 2764 MB/sec Jan 29 16:20:51.430927 kernel: xor: using function: 32regs (21710 MB/sec) Jan 29 16:20:51.486905 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 16:20:51.503246 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:20:51.509081 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:20:51.527247 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jan 29 16:20:51.532726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:20:51.543739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 16:20:51.560175 dracut-pre-trigger[459]: rd.md=0: removing MD RAID activation Jan 29 16:20:51.605934 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:20:51.613089 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:20:51.675885 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:51.687326 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 16:20:51.707303 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 16:20:51.711343 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:20:51.713599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:51.714984 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:20:51.723039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 16:20:51.736803 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 16:20:51.777950 kernel: scsi host0: Virtio SCSI HBA Jan 29 16:20:51.789507 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 29 16:20:51.789602 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 29 16:20:51.841221 kernel: ACPI: bus type USB registered Jan 29 16:20:51.841294 kernel: usbcore: registered new interface driver usbfs Jan 29 16:20:51.841307 kernel: usbcore: registered new interface driver hub Jan 29 16:20:51.841318 kernel: usbcore: registered new device driver usb Jan 29 16:20:51.866394 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 29 16:20:51.872828 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 29 16:20:51.872996 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 16:20:51.873021 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:20:51.904507 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 29 16:20:51.904767 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 29 16:20:51.904905 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 29 16:20:51.905081 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 29 16:20:51.905270 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 29 16:20:51.905363 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 29 16:20:51.905447 kernel: hub 1-0:1.0: USB hub found Jan 29 16:20:51.905591 kernel: hub 1-0:1.0: 4 ports detected Jan 29 16:20:51.905734 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 29 16:20:51.905898 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 29 16:20:51.912450 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 29 16:20:51.912576 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 29 16:20:51.912660 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 29 16:20:51.912753 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 16:20:51.912876 kernel: hub 2-0:1.0: USB hub found Jan 29 16:20:51.912987 kernel: hub 2-0:1.0: 4 ports detected Jan 29 16:20:51.913069 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 16:20:51.913082 kernel: GPT:17805311 != 80003071 Jan 29 16:20:51.913093 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 16:20:51.913102 kernel: GPT:17805311 != 80003071 Jan 29 16:20:51.913112 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 16:20:51.913125 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:20:51.913134 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 29 16:20:51.864864 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:20:51.864999 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:51.866979 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:51.867731 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:20:51.868013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:51.869741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:51.879252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:20:51.900970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 16:20:51.909114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 16:20:51.951554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:51.999850 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (510) Jan 29 16:20:52.018953 kernel: BTRFS: device fsid d7b4a0ef-7a03-4a6c-8f31-7cafae04447a devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (500) Jan 29 16:20:52.027235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 29 16:20:52.041619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 29 16:20:52.051034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:20:52.058046 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 29 16:20:52.058867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 29 16:20:52.066112 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 16:20:52.095298 disk-uuid[575]: Primary Header is updated. Jan 29 16:20:52.095298 disk-uuid[575]: Secondary Entries is updated. Jan 29 16:20:52.095298 disk-uuid[575]: Secondary Header is updated. Jan 29 16:20:52.106542 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:20:52.135943 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 29 16:20:52.380002 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 29 16:20:52.516277 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 29 16:20:52.516359 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 29 16:20:52.517645 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 29 16:20:52.572222 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 29 16:20:52.572511 kernel: usbcore: registered new interface driver usbhid Jan 29 16:20:52.572528 kernel: usbhid: USB HID core driver Jan 29 16:20:53.122284 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 16:20:53.122349 disk-uuid[576]: The operation has completed successfully. Jan 29 16:20:53.199990 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 16:20:53.200122 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 16:20:53.223347 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 16:20:53.239009 sh[590]: Success Jan 29 16:20:53.252005 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 16:20:53.323761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 16:20:53.335122 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 16:20:53.337522 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
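verity-setup above maps /usr through dm-verity with SHA-256 ("sha256-ce" is the ARMv8 crypto-extension implementation). The sketch below is a simplified illustration of the underlying idea, one digest per fixed-size block so any modified block is detectable when read back; the real device uses a salted hash tree built by veritysetup, and the 4096-byte block size and sample path are assumptions.

    import hashlib

    BLOCK = 4096

    def block_digests(path: str) -> list[str]:
        # One SHA-256 digest per block; flipping a single bit in any block
        # changes that block's digest, which is what dm-verity verifies.
        digests = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK):
                digests.append(hashlib.sha256(chunk).hexdigest())
        return digests

    print(len(block_digests("/usr/lib/os-release")), "block(s) hashed")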
Jan 29 16:20:53.360272 kernel: BTRFS info (device dm-0): first mount of filesystem d7b4a0ef-7a03-4a6c-8f31-7cafae04447a Jan 29 16:20:53.360351 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:20:53.360366 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 16:20:53.360379 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 16:20:53.361131 kernel: BTRFS info (device dm-0): using free space tree Jan 29 16:20:53.367866 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 16:20:53.369795 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 16:20:53.371013 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 16:20:53.378094 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 16:20:53.382147 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 16:20:53.406055 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:20:53.406114 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:20:53.407259 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:20:53.411216 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:20:53.411281 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:20:53.425157 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 16:20:53.426077 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:20:53.433962 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 16:20:53.441075 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 16:20:53.507162 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:20:53.517084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:20:53.546718 systemd-networkd[777]: lo: Link UP Jan 29 16:20:53.547765 systemd-networkd[777]: lo: Gained carrier Jan 29 16:20:53.551647 systemd-networkd[777]: Enumeration completed Jan 29 16:20:53.552263 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:53.552267 systemd-networkd[777]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:20:53.553381 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:53.556415 ignition[707]: Ignition 2.20.0 Jan 29 16:20:53.553385 systemd-networkd[777]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:20:53.556424 ignition[707]: Stage: fetch-offline Jan 29 16:20:53.553799 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 29 16:20:53.556473 ignition[707]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:53.555015 systemd-networkd[777]: eth0: Link UP Jan 29 16:20:53.556481 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:53.555019 systemd-networkd[777]: eth0: Gained carrier Jan 29 16:20:53.556660 ignition[707]: parsed url from cmdline: "" Jan 29 16:20:53.555029 systemd-networkd[777]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:53.556663 ignition[707]: no config URL provided Jan 29 16:20:53.558269 systemd-networkd[777]: eth1: Link UP Jan 29 16:20:53.556667 ignition[707]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:20:53.558273 systemd-networkd[777]: eth1: Gained carrier Jan 29 16:20:53.556675 ignition[707]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:20:53.558284 systemd-networkd[777]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:20:53.556680 ignition[707]: failed to fetch config: resource requires networking Jan 29 16:20:53.559768 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:20:53.556928 ignition[707]: Ignition finished successfully Jan 29 16:20:53.561471 systemd[1]: Reached target network.target - Network. Jan 29 16:20:53.569169 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 16:20:53.586954 systemd-networkd[777]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:20:53.595985 ignition[784]: Ignition 2.20.0 Jan 29 16:20:53.595996 ignition[784]: Stage: fetch Jan 29 16:20:53.596241 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:53.596253 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:53.596366 ignition[784]: parsed url from cmdline: "" Jan 29 16:20:53.596370 ignition[784]: no config URL provided Jan 29 16:20:53.596374 ignition[784]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 16:20:53.596382 ignition[784]: no config at "/usr/lib/ignition/user.ign" Jan 29 16:20:53.596471 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 29 16:20:53.597395 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 29 16:20:53.612073 systemd-networkd[777]: eth0: DHCPv4 address 167.235.198.80/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:20:53.797683 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 29 16:20:53.806295 ignition[784]: GET result: OK Jan 29 16:20:53.806435 ignition[784]: parsing config with SHA512: 242212e81279a0d57d71f56ec9d2b1084fbef6f238b4ec6b1c242e98bc906cb3c2c21e6fb6fd341d27a99c265072ebe9d04e72117452c6a0d6941d6aad65f77a Jan 29 16:20:53.813563 unknown[784]: fetched base config from "system" Jan 29 16:20:53.813573 unknown[784]: fetched base config from "system" Jan 29 16:20:53.813974 ignition[784]: fetch: fetch complete Jan 29 16:20:53.813578 unknown[784]: fetched user config from "hetzner" Jan 29 16:20:53.813979 ignition[784]: fetch: fetch passed Jan 29 16:20:53.816161 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 16:20:53.814030 ignition[784]: Ignition finished successfully Jan 29 16:20:53.822788 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
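The fetch stage above fails on attempt #1 with "network is unreachable", succeeds on attempt #2 once DHCP has configured the interfaces, and then logs the SHA512 of the parsed config. A rough sketch of that request/retry/digest sequence; the metadata URL is the one in the log and only answers from inside a Hetzner instance, while the attempt count and timeouts are arbitrary choices here.

    import hashlib
    import time
    import urllib.request

    URL = "http://169.254.169.254/hetzner/v1/userdata"

    def fetch_userdata(attempts: int = 5, delay: float = 2.0) -> bytes:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(URL, timeout=5) as resp:
                    return resp.read()
            except OSError as exc:                          # e.g. "network is unreachable"
                print(f"GET attempt #{attempt} failed: {exc}")
                time.sleep(delay)
        raise RuntimeError("could not reach the metadata service")

    if __name__ == "__main__":
        data = fetch_userdata()
        # Ignition logs the SHA512 of the config it parsed; same digest here.
        print("config SHA512:", hashlib.sha512(data).hexdigest())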
Jan 29 16:20:53.840704 ignition[791]: Ignition 2.20.0 Jan 29 16:20:53.840715 ignition[791]: Stage: kargs Jan 29 16:20:53.840949 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:53.840959 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:53.842063 ignition[791]: kargs: kargs passed Jan 29 16:20:53.845287 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 16:20:53.842134 ignition[791]: Ignition finished successfully Jan 29 16:20:53.851183 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 16:20:53.868892 ignition[798]: Ignition 2.20.0 Jan 29 16:20:53.868906 ignition[798]: Stage: disks Jan 29 16:20:53.869219 ignition[798]: no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:53.869231 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:53.872911 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 16:20:53.870423 ignition[798]: disks: disks passed Jan 29 16:20:53.874107 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 16:20:53.870485 ignition[798]: Ignition finished successfully Jan 29 16:20:53.875219 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 16:20:53.876484 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:20:53.877735 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:20:53.878820 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:20:53.885093 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 16:20:53.909057 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 16:20:53.915342 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 16:20:54.374088 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 16:20:54.441248 kernel: EXT4-fs (sda9): mounted filesystem 41c89329-6889-4dd8-82a1-efe68f55bab8 r/w with ordered data mode. Quota mode: none. Jan 29 16:20:54.441715 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 16:20:54.443052 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 16:20:54.452035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 16:20:54.457079 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 16:20:54.467211 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 16:20:54.472159 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 16:20:54.472207 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:20:54.476410 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 16:20:54.484550 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 16:20:54.489986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (814) Jan 29 16:20:54.490025 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:20:54.490036 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:20:54.490232 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:20:54.500069 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:20:54.500153 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:20:54.505986 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:20:54.543918 coreos-metadata[816]: Jan 29 16:20:54.543 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 29 16:20:54.546791 coreos-metadata[816]: Jan 29 16:20:54.546 INFO Fetch successful Jan 29 16:20:54.549851 coreos-metadata[816]: Jan 29 16:20:54.548 INFO wrote hostname ci-4230-0-0-e-139a7b6c18 to /sysroot/etc/hostname Jan 29 16:20:54.550849 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 16:20:54.553606 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:20:54.561737 initrd-setup-root[850]: cut: /sysroot/etc/group: No such file or directory Jan 29 16:20:54.573414 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 16:20:54.581777 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 16:20:54.711991 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 16:20:54.719004 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 16:20:54.722248 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 16:20:54.733043 kernel: BTRFS info (device sda6): last unmount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:20:54.759349 ignition[931]: INFO : Ignition 2.20.0 Jan 29 16:20:54.759349 ignition[931]: INFO : Stage: mount Jan 29 16:20:54.761659 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:54.761659 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:54.761659 ignition[931]: INFO : mount: mount passed Jan 29 16:20:54.761659 ignition[931]: INFO : Ignition finished successfully Jan 29 16:20:54.763892 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 16:20:54.773082 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 16:20:54.774243 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 16:20:55.257058 systemd-networkd[777]: eth0: Gained IPv6LL Jan 29 16:20:55.321309 systemd-networkd[777]: eth1: Gained IPv6LL Jan 29 16:20:55.361517 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 16:20:55.368109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 16:20:55.381176 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (944) Jan 29 16:20:55.383115 kernel: BTRFS info (device sda6): first mount of filesystem c42147cd-4375-422a-9f40-8bdefff824e9 Jan 29 16:20:55.383169 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 16:20:55.383180 kernel: BTRFS info (device sda6): using free space tree Jan 29 16:20:55.386844 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 16:20:55.386912 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 16:20:55.389634 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 16:20:55.409970 ignition[961]: INFO : Ignition 2.20.0 Jan 29 16:20:55.409970 ignition[961]: INFO : Stage: files Jan 29 16:20:55.412063 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:55.412063 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:55.412063 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jan 29 16:20:55.415017 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 16:20:55.415017 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 16:20:55.417041 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 16:20:55.417041 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 16:20:55.417041 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 16:20:55.416490 unknown[961]: wrote ssh authorized keys file for user: core Jan 29 16:20:55.420476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:20:55.420476 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 29 16:20:55.774024 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 16:20:57.281762 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 29 16:20:57.281762 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 16:20:57.286687 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 16:20:57.619863 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 16:20:58.104932 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 16:20:58.104932 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 16:20:58.108039 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:20:58.108039 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 16:20:58.108039 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 16:20:58.108039 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 29 16:20:58.118258 ignition[961]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:20:58.118258 ignition[961]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 29 16:20:58.118258 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 29 16:20:58.118258 ignition[961]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 29 16:20:58.118258 ignition[961]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 16:20:58.118258 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:20:58.118258 ignition[961]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 16:20:58.118258 ignition[961]: INFO : files: files passed Jan 29 16:20:58.118258 ignition[961]: INFO : Ignition finished successfully Jan 29 16:20:58.113299 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 16:20:58.126046 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 29 16:20:58.130161 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 16:20:58.133597 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 16:20:58.134878 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 16:20:58.160698 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:58.160698 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:58.163685 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 16:20:58.166521 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:20:58.168345 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 16:20:58.176876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 16:20:58.217461 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 16:20:58.218479 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 16:20:58.220619 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 16:20:58.221570 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 16:20:58.222621 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 16:20:58.230224 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 16:20:58.247419 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:20:58.253166 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 16:20:58.279105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:20:58.279902 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:58.281298 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 16:20:58.282398 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 16:20:58.282544 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 16:20:58.284185 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 16:20:58.284879 systemd[1]: Stopped target basic.target - Basic System. Jan 29 16:20:58.286004 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 16:20:58.287123 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 16:20:58.288188 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 16:20:58.289307 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 16:20:58.290497 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 16:20:58.291838 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 16:20:58.292897 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 16:20:58.294101 systemd[1]: Stopped target swap.target - Swaps. Jan 29 16:20:58.295213 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 16:20:58.295367 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 16:20:58.296955 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 16:20:58.297669 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:58.298869 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 16:20:58.299353 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:58.300121 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 16:20:58.300273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 16:20:58.301707 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 16:20:58.301855 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 16:20:58.303113 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 16:20:58.303235 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 16:20:58.304793 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 16:20:58.304918 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 16:20:58.315127 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 16:20:58.315663 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 16:20:58.315834 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:58.324217 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 16:20:58.324733 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 16:20:58.324918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:58.327175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 16:20:58.327299 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 16:20:58.337415 ignition[1014]: INFO : Ignition 2.20.0 Jan 29 16:20:58.337415 ignition[1014]: INFO : Stage: umount Jan 29 16:20:58.342461 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 16:20:58.342461 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 29 16:20:58.342461 ignition[1014]: INFO : umount: umount passed Jan 29 16:20:58.342461 ignition[1014]: INFO : Ignition finished successfully Jan 29 16:20:58.339671 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 16:20:58.339772 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 16:20:58.342898 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 16:20:58.343017 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 16:20:58.350320 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 16:20:58.350382 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 16:20:58.351470 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 16:20:58.351516 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 16:20:58.353108 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 16:20:58.353161 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 16:20:58.354090 systemd[1]: Stopped target network.target - Network. Jan 29 16:20:58.357017 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 16:20:58.357169 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 16:20:58.359292 systemd[1]: Stopped target paths.target - Path Units. 
Jan 29 16:20:58.361276 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 16:20:58.365439 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:20:58.366426 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 16:20:58.367581 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 16:20:58.368334 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 16:20:58.368386 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 16:20:58.369023 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 16:20:58.369099 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 16:20:58.370146 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 16:20:58.370210 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 16:20:58.371211 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 16:20:58.371254 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 16:20:58.372528 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 16:20:58.373671 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 16:20:58.377044 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 16:20:58.377771 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 16:20:58.379856 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 16:20:58.381150 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 16:20:58.381264 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 16:20:58.382843 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 16:20:58.382984 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 16:20:58.386733 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 29 16:20:58.387435 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 16:20:58.387541 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:20:58.391226 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 29 16:20:58.391529 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 16:20:58.391641 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 16:20:58.394442 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 29 16:20:58.395214 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 16:20:58.395287 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:58.403044 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 16:20:58.403587 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 16:20:58.403667 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 16:20:58.406071 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 16:20:58.406138 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:58.406863 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 16:20:58.406907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 29 16:20:58.407759 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:20:58.412719 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 29 16:20:58.425912 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 16:20:58.426099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 16:20:58.433128 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 16:20:58.433468 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:20:58.435679 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 16:20:58.435744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:58.437711 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 16:20:58.437779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:58.439466 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 16:20:58.439543 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 16:20:58.440939 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 16:20:58.440997 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 16:20:58.442450 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 16:20:58.442512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 16:20:58.448291 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 16:20:58.448893 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 16:20:58.448973 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:20:58.455016 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:20:58.455112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:20:58.461580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 16:20:58.461720 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 16:20:58.463768 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 16:20:58.474548 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 16:20:58.486370 systemd[1]: Switching root. Jan 29 16:20:58.544378 systemd-journald[238]: Journal stopped Jan 29 16:20:59.696921 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jan 29 16:20:59.696993 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 16:20:59.697006 kernel: SELinux: policy capability open_perms=1 Jan 29 16:20:59.697017 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 16:20:59.697030 kernel: SELinux: policy capability always_check_network=0 Jan 29 16:20:59.697059 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 16:20:59.697070 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 16:20:59.697079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 16:20:59.697094 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 16:20:59.697103 kernel: audit: type=1403 audit(1738167658.727:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 16:20:59.697114 systemd[1]: Successfully loaded SELinux policy in 41.347ms. Jan 29 16:20:59.697138 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.968ms. 
Jan 29 16:20:59.697154 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 29 16:20:59.697165 systemd[1]: Detected virtualization kvm. Jan 29 16:20:59.697175 systemd[1]: Detected architecture arm64. Jan 29 16:20:59.697185 systemd[1]: Detected first boot. Jan 29 16:20:59.697195 systemd[1]: Hostname set to <ci-4230-0-0-e-139a7b6c18>. Jan 29 16:20:59.697205 systemd[1]: Initializing machine ID from VM UUID. Jan 29 16:20:59.697215 kernel: NET: Registered PF_VSOCK protocol family Jan 29 16:20:59.697224 zram_generator::config[1058]: No configuration found. Jan 29 16:20:59.697237 systemd[1]: Populated /etc with preset unit settings. Jan 29 16:20:59.697248 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 29 16:20:59.697258 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 16:20:59.697268 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 16:20:59.697278 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 16:20:59.697292 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 16:20:59.697304 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 16:20:59.697316 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 16:20:59.697328 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 16:20:59.697342 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 16:20:59.697354 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 16:20:59.697366 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 16:20:59.697376 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 16:20:59.697387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 16:20:59.697398 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 16:20:59.697408 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 16:20:59.697418 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 16:20:59.697429 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 16:20:59.697440 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 16:20:59.697451 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 16:20:59.697461 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 16:20:59.697472 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 16:20:59.697482 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 16:20:59.697493 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 16:20:59.697504 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 16:20:59.697514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 16:20:59.697528 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 16:20:59.697538 systemd[1]: Reached target slices.target - Slice Units. Jan 29 16:20:59.697548 systemd[1]: Reached target swap.target - Swaps. Jan 29 16:20:59.697558 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 16:20:59.697568 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 16:20:59.697578 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 29 16:20:59.697588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 16:20:59.697600 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 16:20:59.697610 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 16:20:59.697620 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 16:20:59.697631 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 16:20:59.697646 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 16:20:59.697660 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 16:20:59.697674 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 16:20:59.697685 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 16:20:59.697695 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 16:20:59.697706 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 16:20:59.697716 systemd[1]: Reached target machines.target - Containers. Jan 29 16:20:59.697726 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 16:20:59.697737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:20:59.697747 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 16:20:59.697758 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 16:20:59.697769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:20:59.697779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:20:59.697789 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:20:59.697800 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 16:20:59.700552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:20:59.700618 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 16:20:59.700632 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 16:20:59.700650 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 16:20:59.700661 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 16:20:59.700672 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 29 16:20:59.700687 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:20:59.700699 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 16:20:59.700710 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 16:20:59.700725 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 16:20:59.700739 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 16:20:59.700751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 29 16:20:59.700762 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 16:20:59.700776 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 16:20:59.700788 systemd[1]: Stopped verity-setup.service. Jan 29 16:20:59.700801 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 16:20:59.700831 kernel: loop: module loaded Jan 29 16:20:59.700851 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 16:20:59.700862 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 16:20:59.700876 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 16:20:59.700888 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 16:20:59.700899 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 16:20:59.700912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 16:20:59.700923 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 16:20:59.700933 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 16:20:59.700943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:20:59.700957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:20:59.700970 kernel: fuse: init (API version 7.39) Jan 29 16:20:59.700981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:20:59.700994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:20:59.701005 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 16:20:59.701020 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 16:20:59.701032 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:20:59.701092 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:20:59.701107 kernel: ACPI: bus type drm_connector registered Jan 29 16:20:59.701118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 16:20:59.701132 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:20:59.701144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:20:59.701156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 16:20:59.701169 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 16:20:59.701185 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 29 16:20:59.701197 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 16:20:59.701210 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 16:20:59.701223 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 29 16:20:59.701283 systemd-journald[1129]: Collecting audit messages is disabled. Jan 29 16:20:59.701321 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 16:20:59.701335 systemd-journald[1129]: Journal started Jan 29 16:20:59.701359 systemd-journald[1129]: Runtime Journal (/run/log/journal/7612cf72a96e4da6831226e77bbd8207) is 8M, max 76.6M, 68.6M free. Jan 29 16:20:59.366911 systemd[1]: Queued start job for default target multi-user.target. Jan 29 16:20:59.379725 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 16:20:59.380277 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 16:20:59.707850 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 16:20:59.709948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:20:59.720966 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 16:20:59.721105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:20:59.730847 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 16:20:59.733899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:20:59.739148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 16:20:59.751944 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 16:20:59.760875 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 16:20:59.761745 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 16:20:59.763100 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 16:20:59.764957 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 29 16:20:59.766461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 16:20:59.767888 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 16:20:59.769188 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 16:20:59.772398 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 16:20:59.789975 kernel: loop0: detected capacity change from 0 to 194096 Jan 29 16:20:59.794493 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 16:20:59.795645 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 16:20:59.805090 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 16:20:59.811154 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 29 16:20:59.817500 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
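With systemd-journald.service up and the runtime journal about to be flushed to persistent storage, entries like the ones in this log can be read back as structured records. A small sketch using journalctl's JSON output (one object per line); the unit filter and entry count are arbitrary examples.

    import json
    import subprocess

    out = subprocess.run(
        ["journalctl", "-b", "-u", "systemd-journald.service",
         "-o", "json", "-n", "10", "--no-pager"],
        check=True, capture_output=True, text=True,
    ).stdout

    for line in out.splitlines():
        entry = json.loads(line)
        # __REALTIME_TIMESTAMP is microseconds since the epoch, as a string.
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))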
Jan 29 16:20:59.832846 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 16:20:59.842408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 16:20:59.853975 systemd-journald[1129]: Time spent on flushing to /var/log/journal/7612cf72a96e4da6831226e77bbd8207 is 85.877ms for 1143 entries. Jan 29 16:20:59.853975 systemd-journald[1129]: System Journal (/var/log/journal/7612cf72a96e4da6831226e77bbd8207) is 8M, max 584.8M, 576.8M free. Jan 29 16:20:59.956780 systemd-journald[1129]: Received client request to flush runtime journal. Jan 29 16:20:59.956866 kernel: loop1: detected capacity change from 0 to 8 Jan 29 16:20:59.956883 kernel: loop2: detected capacity change from 0 to 123192 Jan 29 16:20:59.882696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 16:20:59.895372 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 16:20:59.907028 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 29 16:20:59.927749 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 16:20:59.941256 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 16:20:59.951943 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 16:20:59.964869 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 16:20:59.990853 kernel: loop3: detected capacity change from 0 to 113512 Jan 29 16:21:00.005363 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 29 16:21:00.007904 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 29 16:21:00.021191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 16:21:00.057863 kernel: loop4: detected capacity change from 0 to 194096 Jan 29 16:21:00.085565 kernel: loop5: detected capacity change from 0 to 8 Jan 29 16:21:00.090854 kernel: loop6: detected capacity change from 0 to 123192 Jan 29 16:21:00.106896 kernel: loop7: detected capacity change from 0 to 113512 Jan 29 16:21:00.123886 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 29 16:21:00.124419 (sd-merge)[1204]: Merged extensions into '/usr'. Jan 29 16:21:00.132533 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 16:21:00.132555 systemd[1]: Reloading... Jan 29 16:21:00.255298 zram_generator::config[1232]: No configuration found. Jan 29 16:21:00.419943 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 16:21:00.448298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:00.521281 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 16:21:00.522046 systemd[1]: Reloading finished in 389 ms. Jan 29 16:21:00.541866 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 16:21:00.543271 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 16:21:00.556231 systemd[1]: Starting ensure-sysext.service... 
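The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner extension images onto /usr and /opt, followed by a daemon reload. A quick sketch that lists what sits in the commonly documented sysext search directories; which of them are populated depends on the image, so treat the paths as assumptions.

    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            kind = "dir" if entry.is_dir() else "image"
            print(f"{d}: {entry.name} ({kind})")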
Jan 29 16:21:00.570119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 16:21:00.597624 systemd[1]: Reload requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)... Jan 29 16:21:00.597644 systemd[1]: Reloading... Jan 29 16:21:00.604523 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 16:21:00.607679 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 16:21:00.608668 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 16:21:00.613552 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jan 29 16:21:00.615413 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jan 29 16:21:00.620581 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:21:00.620595 systemd-tmpfiles[1270]: Skipping /boot Jan 29 16:21:00.636708 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 16:21:00.636734 systemd-tmpfiles[1270]: Skipping /boot Jan 29 16:21:00.688847 zram_generator::config[1296]: No configuration found. Jan 29 16:21:00.823794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:21:00.896350 systemd[1]: Reloading finished in 298 ms. Jan 29 16:21:00.911239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 16:21:00.912892 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 16:21:00.951712 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:21:00.958204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 16:21:00.962983 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 16:21:00.970993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 16:21:00.977929 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 16:21:00.984208 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 16:21:00.990573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:00.995368 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:21:01.005740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:21:01.011354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:21:01.014065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:01.014234 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:01.018275 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 16:21:01.028522 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 16:21:01.029445 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:01.029531 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:01.032943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 16:21:01.047998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:01.055127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 16:21:01.057168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:01.057338 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:01.059523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:21:01.059723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:21:01.068959 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 16:21:01.071740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:21:01.072567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 16:21:01.096943 systemd[1]: Finished ensure-sysext.service. Jan 29 16:21:01.098376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:21:01.099010 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:21:01.104762 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:21:01.106581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:21:01.116336 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 16:21:01.128170 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 16:21:01.129264 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 16:21:01.130921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 16:21:01.140794 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 16:21:01.168006 augenrules[1381]: No rules Jan 29 16:21:01.169968 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:21:01.170264 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:21:01.175653 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 16:21:01.177302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:21:01.178945 systemd-udevd[1348]: Using default interface naming scheme 'v255'. Jan 29 16:21:01.190670 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 16:21:01.255553 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 16:21:01.263541 systemd-resolved[1342]: Positive Trust Anchors: Jan 29 16:21:01.263581 systemd-resolved[1342]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 16:21:01.263614 systemd-resolved[1342]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 16:21:01.269802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 16:21:01.273122 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 16:21:01.273971 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 16:21:01.285755 systemd-resolved[1342]: Using system hostname 'ci-4230-0-0-e-139a7b6c18'. Jan 29 16:21:01.292262 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 16:21:01.293101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 16:21:01.382540 systemd-networkd[1393]: lo: Link UP Jan 29 16:21:01.382558 systemd-networkd[1393]: lo: Gained carrier Jan 29 16:21:01.402471 systemd-networkd[1393]: Enumeration completed Jan 29 16:21:01.402610 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 16:21:01.403379 systemd[1]: Reached target network.target - Network. Jan 29 16:21:01.413649 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 29 16:21:01.419537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 16:21:01.422543 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 16:21:01.454303 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 29 16:21:01.546848 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1395) Jan 29 16:21:01.575688 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:21:01.575706 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 16:21:01.578967 systemd-networkd[1393]: eth0: Link UP Jan 29 16:21:01.578975 systemd-networkd[1393]: eth0: Gained carrier Jan 29 16:21:01.578999 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:21:01.596528 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:21:01.596549 systemd-networkd[1393]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 29 16:21:01.598192 systemd-networkd[1393]: eth1: Link UP Jan 29 16:21:01.598205 systemd-networkd[1393]: eth1: Gained carrier Jan 29 16:21:01.598276 systemd-networkd[1393]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 16:21:01.625971 systemd-networkd[1393]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 16:21:01.627262 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 29 16:21:01.632067 systemd-networkd[1393]: eth0: DHCPv4 address 167.235.198.80/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 29 16:21:01.632615 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 29 16:21:01.632723 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 29 16:21:01.632902 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 29 16:21:01.635186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 16:21:01.664711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 16:21:01.680430 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 29 16:21:01.680645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 16:21:01.686743 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 16:21:01.686992 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 16:21:01.692176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 16:21:01.695519 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 29 16:21:01.695584 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 16:21:01.695597 kernel: [drm] features: -context_init Jan 29 16:21:01.699294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 16:21:01.699980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 16:21:01.700041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 29 16:21:01.700065 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 16:21:01.704392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 16:21:01.704906 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 16:21:01.710124 kernel: [drm] number of scanouts: 1 Jan 29 16:21:01.710199 kernel: [drm] number of cap sets: 0 Jan 29 16:21:01.710333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 16:21:01.711530 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 16:21:01.711880 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 29 16:21:01.713412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 16:21:01.717882 kernel: Console: switching to colour frame buffer device 160x50 Jan 29 16:21:01.731265 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 16:21:01.731590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 16:21:01.732694 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 16:21:01.735100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 16:21:01.781519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:21:01.789876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 16:21:01.790140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:21:01.805243 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 16:21:01.903997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 16:21:01.939487 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 16:21:01.947151 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 16:21:01.971848 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:21:02.003226 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 16:21:02.005420 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 16:21:02.008118 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 16:21:02.009584 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 16:21:02.011181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 16:21:02.012350 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 16:21:02.013262 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 16:21:02.014048 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 16:21:02.014678 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 16:21:02.014724 systemd[1]: Reached target paths.target - Path Units. Jan 29 16:21:02.015311 systemd[1]: Reached target timers.target - Timer Units. Jan 29 16:21:02.017907 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 16:21:02.020614 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 16:21:02.024700 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 29 16:21:02.025895 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 29 16:21:02.026670 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 29 16:21:02.039872 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 29 16:21:02.042544 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 29 16:21:02.051760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 16:21:02.053420 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 16:21:02.054508 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 16:21:02.055278 systemd[1]: Reached target basic.target - Basic System. Jan 29 16:21:02.056091 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:21:02.056127 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 16:21:02.065043 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 16:21:02.067034 lvm[1461]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 16:21:02.070164 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 16:21:02.075245 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 16:21:02.085912 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 16:21:02.090401 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 16:21:02.091023 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 16:21:02.097195 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 16:21:02.102365 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 16:21:02.108199 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 29 16:21:02.110246 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 16:21:02.116096 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 16:21:02.116391 jq[1465]: false Jan 29 16:21:02.122041 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 16:21:02.126102 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 16:21:02.126686 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 16:21:02.127653 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 16:21:02.131996 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 16:21:02.133582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 16:21:02.135606 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 16:21:02.136108 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 16:21:02.147710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 16:21:02.149938 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 16:21:02.152292 dbus-daemon[1464]: [system] SELinux support is enabled Jan 29 16:21:02.152515 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 16:21:02.159233 extend-filesystems[1466]: Found loop4 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found loop5 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found loop6 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found loop7 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda1 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda2 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda3 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found usr Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda4 Jan 29 16:21:02.159233 extend-filesystems[1466]: Found sda6 Jan 29 16:21:02.158455 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 16:21:02.189317 extend-filesystems[1466]: Found sda7 Jan 29 16:21:02.189317 extend-filesystems[1466]: Found sda9 Jan 29 16:21:02.189317 extend-filesystems[1466]: Checking size of /dev/sda9 Jan 29 16:21:02.158492 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 16:21:02.161085 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 16:21:02.161110 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 16:21:02.193696 coreos-metadata[1463]: Jan 29 16:21:02.192 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 29 16:21:02.194955 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 16:21:02.195457 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 16:21:02.197334 coreos-metadata[1463]: Jan 29 16:21:02.197 INFO Fetch successful Jan 29 16:21:02.200447 coreos-metadata[1463]: Jan 29 16:21:02.200 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 29 16:21:02.207857 coreos-metadata[1463]: Jan 29 16:21:02.202 INFO Fetch successful Jan 29 16:21:02.209473 jq[1478]: true Jan 29 16:21:02.216293 extend-filesystems[1466]: Resized partition /dev/sda9 Jan 29 16:21:02.235721 extend-filesystems[1507]: resize2fs 1.47.1 (20-May-2024) Jan 29 16:21:02.240914 tar[1492]: linux-arm64/helm Jan 29 16:21:02.245160 (ntainerd)[1498]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 16:21:02.250378 update_engine[1477]: I20250129 16:21:02.250203 1477 main.cc:92] Flatcar Update Engine starting Jan 29 16:21:02.254433 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jan 29 16:21:02.254536 systemd[1]: Started update-engine.service - Update Engine. Jan 29 16:21:02.254908 update_engine[1477]: I20250129 16:21:02.254645 1477 update_check_scheduler.cc:74] Next update check in 4m40s Jan 29 16:21:02.259295 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 16:21:02.264320 jq[1504]: true Jan 29 16:21:02.371826 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 16:21:02.372667 systemd-logind[1475]: New seat seat0. 
Jan 29 16:21:02.378781 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1398) Jan 29 16:21:02.384413 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 16:21:02.384442 systemd-logind[1475]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 29 16:21:02.384674 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 16:21:02.446978 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 16:21:02.448273 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 16:21:02.459963 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:21:02.465620 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 16:21:02.482378 systemd[1]: Starting sshkeys.service... Jan 29 16:21:02.507487 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 16:21:02.519073 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 16:21:02.527419 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 16:21:02.551878 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jan 29 16:21:02.581667 coreos-metadata[1546]: Jan 29 16:21:02.581 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 29 16:21:02.584295 coreos-metadata[1546]: Jan 29 16:21:02.583 INFO Fetch successful Jan 29 16:21:02.584428 extend-filesystems[1507]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 16:21:02.584428 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 29 16:21:02.584428 extend-filesystems[1507]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jan 29 16:21:02.596105 extend-filesystems[1466]: Resized filesystem in /dev/sda9 Jan 29 16:21:02.596105 extend-filesystems[1466]: Found sr0 Jan 29 16:21:02.585685 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 16:21:02.586290 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 16:21:02.595921 unknown[1546]: wrote ssh authorized keys file for user: core Jan 29 16:21:02.641890 update-ssh-keys[1552]: Updated "/home/core/.ssh/authorized_keys" Jan 29 16:21:02.642920 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 16:21:02.649717 systemd[1]: Finished sshkeys.service. Jan 29 16:21:02.753156 containerd[1498]: time="2025-01-29T16:21:02.751957040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 16:21:02.808233 containerd[1498]: time="2025-01-29T16:21:02.808174360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.813964960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814039840Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814065920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814308200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814331360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814413600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814430920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814692440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814711840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814731320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:02.815882 containerd[1498]: time="2025-01-29T16:21:02.814745960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.814877240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.815235840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.815417280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.815436160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.815522120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 16:21:02.816845 containerd[1498]: time="2025-01-29T16:21:02.815572640Z" level=info msg="metadata content store policy set" policy=shared Jan 29 16:21:02.827152 containerd[1498]: time="2025-01-29T16:21:02.827104120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 16:21:02.827527 containerd[1498]: time="2025-01-29T16:21:02.827504040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 29 16:21:02.827855 containerd[1498]: time="2025-01-29T16:21:02.827835640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 16:21:02.828468 containerd[1498]: time="2025-01-29T16:21:02.828448560Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 16:21:02.828539 containerd[1498]: time="2025-01-29T16:21:02.828525520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 16:21:02.828773 containerd[1498]: time="2025-01-29T16:21:02.828750720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 16:21:02.829417 containerd[1498]: time="2025-01-29T16:21:02.829390240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 16:21:02.830295 containerd[1498]: time="2025-01-29T16:21:02.830272440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 16:21:02.830774 containerd[1498]: time="2025-01-29T16:21:02.830754200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 16:21:02.830883 containerd[1498]: time="2025-01-29T16:21:02.830869080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 16:21:02.831000 containerd[1498]: time="2025-01-29T16:21:02.830973840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831073 containerd[1498]: time="2025-01-29T16:21:02.831059640Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831293 containerd[1498]: time="2025-01-29T16:21:02.831275640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831374 containerd[1498]: time="2025-01-29T16:21:02.831361600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831842200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831866720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831881040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831892880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831926600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831942600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.831979 containerd[1498]: time="2025-01-29T16:21:02.831955080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 29 16:21:02.832278 containerd[1498]: time="2025-01-29T16:21:02.831969840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832278 containerd[1498]: time="2025-01-29T16:21:02.832226600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832278 containerd[1498]: time="2025-01-29T16:21:02.832241160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832278 containerd[1498]: time="2025-01-29T16:21:02.832253440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832461 containerd[1498]: time="2025-01-29T16:21:02.832444440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832577 containerd[1498]: time="2025-01-29T16:21:02.832561600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.832901 containerd[1498]: time="2025-01-29T16:21:02.832883680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833067 containerd[1498]: time="2025-01-29T16:21:02.832963400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833067 containerd[1498]: time="2025-01-29T16:21:02.832982160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833067 containerd[1498]: time="2025-01-29T16:21:02.833009880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833379 containerd[1498]: time="2025-01-29T16:21:02.833314800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 16:21:02.833379 containerd[1498]: time="2025-01-29T16:21:02.833356080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833789 containerd[1498]: time="2025-01-29T16:21:02.833502840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.833789 containerd[1498]: time="2025-01-29T16:21:02.833525360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 16:21:02.834248 containerd[1498]: time="2025-01-29T16:21:02.834222200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 16:21:02.834862 containerd[1498]: time="2025-01-29T16:21:02.834314680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 16:21:02.834862 containerd[1498]: time="2025-01-29T16:21:02.834332440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 16:21:02.834862 containerd[1498]: time="2025-01-29T16:21:02.834346560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 16:21:02.834862 containerd[1498]: time="2025-01-29T16:21:02.834356040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 29 16:21:02.835038 containerd[1498]: time="2025-01-29T16:21:02.834982560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 16:21:02.835107 containerd[1498]: time="2025-01-29T16:21:02.835094360Z" level=info msg="NRI interface is disabled by configuration." Jan 29 16:21:02.835166 containerd[1498]: time="2025-01-29T16:21:02.835145000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 16:21:02.836842 containerd[1498]: time="2025-01-29T16:21:02.836688760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 16:21:02.836842 containerd[1498]: time="2025-01-29T16:21:02.836763720Z" level=info msg="Connect containerd service" Jan 29 16:21:02.837268 containerd[1498]: time="2025-01-29T16:21:02.837096400Z" level=info msg="using legacy CRI server" Jan 29 16:21:02.837268 containerd[1498]: time="2025-01-29T16:21:02.837116160Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 16:21:02.838257 containerd[1498]: 
time="2025-01-29T16:21:02.837808240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 16:21:02.842463 containerd[1498]: time="2025-01-29T16:21:02.842400320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843052640Z" level=info msg="Start subscribing containerd event" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843132240Z" level=info msg="Start recovering state" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843223040Z" level=info msg="Start event monitor" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843236080Z" level=info msg="Start snapshots syncer" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843246120Z" level=info msg="Start cni network conf syncer for default" Jan 29 16:21:02.843263 containerd[1498]: time="2025-01-29T16:21:02.843256040Z" level=info msg="Start streaming server" Jan 29 16:21:02.845508 containerd[1498]: time="2025-01-29T16:21:02.843957520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 16:21:02.845508 containerd[1498]: time="2025-01-29T16:21:02.844020760Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 16:21:02.845508 containerd[1498]: time="2025-01-29T16:21:02.844075480Z" level=info msg="containerd successfully booted in 0.096147s" Jan 29 16:21:02.844193 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 16:21:02.872987 systemd-networkd[1393]: eth0: Gained IPv6LL Jan 29 16:21:02.873605 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 29 16:21:02.881643 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 16:21:02.885241 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 16:21:02.894166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:02.897573 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 16:21:02.963442 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 16:21:03.134943 tar[1492]: linux-arm64/LICENSE Jan 29 16:21:03.134943 tar[1492]: linux-arm64/README.md Jan 29 16:21:03.162874 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 16:21:03.271366 sshd_keygen[1502]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 16:21:03.302592 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 16:21:03.311346 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 16:21:03.316402 systemd[1]: Started sshd@0-167.235.198.80:22-103.142.199.159:36434.service - OpenSSH per-connection server daemon (103.142.199.159:36434). Jan 29 16:21:03.319799 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 16:21:03.321894 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 16:21:03.332682 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 16:21:03.352861 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 16:21:03.361322 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Jan 29 16:21:03.369557 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 16:21:03.370437 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 16:21:03.641644 systemd-networkd[1393]: eth1: Gained IPv6LL Jan 29 16:21:03.642398 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Jan 29 16:21:03.800131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:03.801418 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 16:21:03.801770 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:03.802896 systemd[1]: Startup finished in 871ms (kernel) + 8.029s (initrd) + 5.116s (userspace) = 14.017s. Jan 29 16:21:04.128046 sshd[1582]: Invalid user asmon from 103.142.199.159 port 36434 Jan 29 16:21:04.269296 sshd[1582]: Received disconnect from 103.142.199.159 port 36434:11: Bye Bye [preauth] Jan 29 16:21:04.269296 sshd[1582]: Disconnected from invalid user asmon 103.142.199.159 port 36434 [preauth] Jan 29 16:21:04.271504 systemd[1]: sshd@0-167.235.198.80:22-103.142.199.159:36434.service: Deactivated successfully. Jan 29 16:21:04.490454 kubelet[1597]: E0129 16:21:04.490376 1597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:04.495897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:04.496164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:04.496660 systemd[1]: kubelet.service: Consumed 955ms CPU time, 241.7M memory peak. Jan 29 16:21:07.704253 systemd[1]: Started sshd@1-167.235.198.80:22-134.122.8.241:58884.service - OpenSSH per-connection server daemon (134.122.8.241:58884). Jan 29 16:21:08.226737 sshd[1612]: Invalid user hung from 134.122.8.241 port 58884 Jan 29 16:21:08.319012 sshd[1612]: Received disconnect from 134.122.8.241 port 58884:11: Bye Bye [preauth] Jan 29 16:21:08.319012 sshd[1612]: Disconnected from invalid user hung 134.122.8.241 port 58884 [preauth] Jan 29 16:21:08.320896 systemd[1]: sshd@1-167.235.198.80:22-134.122.8.241:58884.service: Deactivated successfully. Jan 29 16:21:14.695026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 16:21:14.710196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:14.844731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:14.859380 (kubelet)[1624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:14.932117 kubelet[1624]: E0129 16:21:14.932017 1624 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:14.938619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:14.939107 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 29 16:21:14.939866 systemd[1]: kubelet.service: Consumed 176ms CPU time, 95.4M memory peak. Jan 29 16:21:24.443434 systemd[1]: Started sshd@2-167.235.198.80:22-149.50.252.131:46758.service - OpenSSH per-connection server daemon (149.50.252.131:46758). Jan 29 16:21:24.447497 systemd[1]: Started sshd@3-167.235.198.80:22-149.50.252.131:46764.service - OpenSSH per-connection server daemon (149.50.252.131:46764). Jan 29 16:21:24.610956 sshd[1634]: Connection closed by 149.50.252.131 port 46758 [preauth] Jan 29 16:21:24.613896 systemd[1]: sshd@2-167.235.198.80:22-149.50.252.131:46758.service: Deactivated successfully. Jan 29 16:21:24.646928 sshd[1635]: Connection closed by 149.50.252.131 port 46764 [preauth] Jan 29 16:21:24.647570 systemd[1]: sshd@3-167.235.198.80:22-149.50.252.131:46764.service: Deactivated successfully. Jan 29 16:21:24.685448 systemd[1]: Started sshd@4-167.235.198.80:22-149.50.252.131:34920.service - OpenSSH per-connection server daemon (149.50.252.131:34920). Jan 29 16:21:24.710341 systemd[1]: Started sshd@5-167.235.198.80:22-149.50.252.131:34922.service - OpenSSH per-connection server daemon (149.50.252.131:34922). Jan 29 16:21:24.874162 sshd[1643]: Connection closed by 149.50.252.131 port 34920 [preauth] Jan 29 16:21:24.876290 sshd[1645]: Connection closed by 149.50.252.131 port 34922 [preauth] Jan 29 16:21:24.875468 systemd[1]: sshd@4-167.235.198.80:22-149.50.252.131:34920.service: Deactivated successfully. Jan 29 16:21:24.878024 systemd[1]: sshd@5-167.235.198.80:22-149.50.252.131:34922.service: Deactivated successfully. Jan 29 16:21:24.944528 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 16:21:24.957661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:24.969564 systemd[1]: Started sshd@6-167.235.198.80:22-149.50.252.131:34932.service - OpenSSH per-connection server daemon (149.50.252.131:34932). Jan 29 16:21:24.997260 systemd[1]: Started sshd@7-167.235.198.80:22-149.50.252.131:34942.service - OpenSSH per-connection server daemon (149.50.252.131:34942). Jan 29 16:21:25.090551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:25.095353 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:25.149981 sshd[1654]: Connection closed by 149.50.252.131 port 34932 [preauth] Jan 29 16:21:25.153662 systemd[1]: sshd@6-167.235.198.80:22-149.50.252.131:34932.service: Deactivated successfully. Jan 29 16:21:25.161988 kubelet[1666]: E0129 16:21:25.161922 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:25.165110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:25.165441 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:25.165950 systemd[1]: kubelet.service: Consumed 161ms CPU time, 96.8M memory peak. Jan 29 16:21:25.194984 sshd[1658]: Connection closed by 149.50.252.131 port 34942 [preauth] Jan 29 16:21:25.196854 systemd[1]: sshd@7-167.235.198.80:22-149.50.252.131:34942.service: Deactivated successfully. 
Jan 29 16:21:25.242349 systemd[1]: Started sshd@8-167.235.198.80:22-149.50.252.131:34954.service - OpenSSH per-connection server daemon (149.50.252.131:34954). Jan 29 16:21:25.280314 systemd[1]: Started sshd@9-167.235.198.80:22-149.50.252.131:34964.service - OpenSSH per-connection server daemon (149.50.252.131:34964). Jan 29 16:21:25.443002 sshd[1679]: Connection closed by 149.50.252.131 port 34954 [preauth] Jan 29 16:21:25.444557 systemd[1]: sshd@8-167.235.198.80:22-149.50.252.131:34954.service: Deactivated successfully. Jan 29 16:21:25.476759 sshd[1681]: Connection closed by 149.50.252.131 port 34964 [preauth] Jan 29 16:21:25.479119 systemd[1]: sshd@9-167.235.198.80:22-149.50.252.131:34964.service: Deactivated successfully. Jan 29 16:21:25.535357 systemd[1]: Started sshd@10-167.235.198.80:22-149.50.252.131:34968.service - OpenSSH per-connection server daemon (149.50.252.131:34968). Jan 29 16:21:25.550235 systemd[1]: Started sshd@11-167.235.198.80:22-149.50.252.131:34978.service - OpenSSH per-connection server daemon (149.50.252.131:34978). Jan 29 16:21:25.715618 sshd[1691]: Connection closed by 149.50.252.131 port 34978 [preauth] Jan 29 16:21:25.717748 systemd[1]: sshd@11-167.235.198.80:22-149.50.252.131:34978.service: Deactivated successfully. Jan 29 16:21:25.727272 sshd[1689]: Connection closed by 149.50.252.131 port 34968 [preauth] Jan 29 16:21:25.728864 systemd[1]: sshd@10-167.235.198.80:22-149.50.252.131:34968.service: Deactivated successfully. Jan 29 16:21:25.821939 systemd[1]: Started sshd@12-167.235.198.80:22-149.50.252.131:34986.service - OpenSSH per-connection server daemon (149.50.252.131:34986). Jan 29 16:21:25.827261 systemd[1]: Started sshd@13-167.235.198.80:22-149.50.252.131:34998.service - OpenSSH per-connection server daemon (149.50.252.131:34998). Jan 29 16:21:26.020786 sshd[1700]: Connection closed by 149.50.252.131 port 34986 [preauth] Jan 29 16:21:26.018203 systemd[1]: sshd@12-167.235.198.80:22-149.50.252.131:34986.service: Deactivated successfully. Jan 29 16:21:26.026926 sshd[1701]: Connection closed by 149.50.252.131 port 34998 [preauth] Jan 29 16:21:26.026435 systemd[1]: sshd@13-167.235.198.80:22-149.50.252.131:34998.service: Deactivated successfully. Jan 29 16:21:26.119693 systemd[1]: Started sshd@14-167.235.198.80:22-149.50.252.131:35010.service - OpenSSH per-connection server daemon (149.50.252.131:35010). Jan 29 16:21:26.124519 systemd[1]: Started sshd@15-167.235.198.80:22-149.50.252.131:35022.service - OpenSSH per-connection server daemon (149.50.252.131:35022). Jan 29 16:21:26.310102 sshd[1710]: Connection closed by 149.50.252.131 port 35010 [preauth] Jan 29 16:21:26.310302 systemd[1]: sshd@14-167.235.198.80:22-149.50.252.131:35010.service: Deactivated successfully. Jan 29 16:21:26.322798 sshd[1711]: Connection closed by 149.50.252.131 port 35022 [preauth] Jan 29 16:21:26.322589 systemd[1]: sshd@15-167.235.198.80:22-149.50.252.131:35022.service: Deactivated successfully. Jan 29 16:21:28.416397 systemd[1]: Started sshd@16-167.235.198.80:22-149.50.252.131:35028.service - OpenSSH per-connection server daemon (149.50.252.131:35028). Jan 29 16:21:28.420158 systemd[1]: Started sshd@17-167.235.198.80:22-149.50.252.131:35044.service - OpenSSH per-connection server daemon (149.50.252.131:35044). Jan 29 16:21:28.586460 sshd[1720]: Connection closed by 149.50.252.131 port 35028 [preauth] Jan 29 16:21:28.587709 systemd[1]: sshd@16-167.235.198.80:22-149.50.252.131:35028.service: Deactivated successfully. 
Jan 29 16:21:28.608887 sshd[1721]: Connection closed by 149.50.252.131 port 35044 [preauth] Jan 29 16:21:28.611744 systemd[1]: sshd@17-167.235.198.80:22-149.50.252.131:35044.service: Deactivated successfully. Jan 29 16:21:33.738583 systemd-timesyncd[1371]: Contacted time server 144.76.66.157:123 (2.flatcar.pool.ntp.org). Jan 29 16:21:33.738687 systemd-timesyncd[1371]: Initial clock synchronization to Wed 2025-01-29 16:21:33.357750 UTC. Jan 29 16:21:35.194404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 16:21:35.204121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:35.328154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:35.332907 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:35.381374 kubelet[1736]: E0129 16:21:35.381311 1736 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:35.384821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:35.385133 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:35.387028 systemd[1]: kubelet.service: Consumed 159ms CPU time, 93.3M memory peak. Jan 29 16:21:36.555930 systemd[1]: Started sshd@18-167.235.198.80:22-149.50.252.131:41706.service - OpenSSH per-connection server daemon (149.50.252.131:41706). Jan 29 16:21:36.574346 systemd[1]: Started sshd@19-167.235.198.80:22-149.50.252.131:41716.service - OpenSSH per-connection server daemon (149.50.252.131:41716). Jan 29 16:21:36.715287 sshd[1745]: Connection closed by 149.50.252.131 port 41706 [preauth] Jan 29 16:21:36.717477 systemd[1]: sshd@18-167.235.198.80:22-149.50.252.131:41706.service: Deactivated successfully. Jan 29 16:21:36.738562 sshd[1747]: Connection closed by 149.50.252.131 port 41716 [preauth] Jan 29 16:21:36.739757 systemd[1]: sshd@19-167.235.198.80:22-149.50.252.131:41716.service: Deactivated successfully. Jan 29 16:21:45.444924 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 16:21:45.454589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:45.578766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:45.583369 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:45.644868 kubelet[1762]: E0129 16:21:45.642611 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:45.647925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:45.648096 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:45.648772 systemd[1]: kubelet.service: Consumed 166ms CPU time, 94.6M memory peak. 
Jan 29 16:21:47.817757 update_engine[1477]: I20250129 16:21:47.816937 1477 update_attempter.cc:509] Updating boot flags... Jan 29 16:21:47.880859 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1779) Jan 29 16:21:47.955928 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1782) Jan 29 16:21:48.037893 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1782) Jan 29 16:21:54.557721 systemd[1]: Started sshd@20-167.235.198.80:22-149.50.252.131:40886.service - OpenSSH per-connection server daemon (149.50.252.131:40886). Jan 29 16:21:54.576342 systemd[1]: Started sshd@21-167.235.198.80:22-149.50.252.131:40902.service - OpenSSH per-connection server daemon (149.50.252.131:40902). Jan 29 16:21:54.747675 sshd[1792]: Connection closed by 149.50.252.131 port 40886 [preauth] Jan 29 16:21:54.749117 systemd[1]: sshd@20-167.235.198.80:22-149.50.252.131:40886.service: Deactivated successfully. Jan 29 16:21:54.780890 sshd[1794]: Connection closed by 149.50.252.131 port 40902 [preauth] Jan 29 16:21:54.781751 systemd[1]: sshd@21-167.235.198.80:22-149.50.252.131:40902.service: Deactivated successfully. Jan 29 16:21:55.694436 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 16:21:55.713517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:21:55.842115 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:21:55.856033 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:21:55.915841 kubelet[1809]: E0129 16:21:55.915575 1809 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:21:55.921324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:21:55.921499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:21:55.921885 systemd[1]: kubelet.service: Consumed 180ms CPU time, 94.6M memory peak. Jan 29 16:22:05.945396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 29 16:22:05.956218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:06.080086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:22:06.092791 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:06.165103 kubelet[1823]: E0129 16:22:06.165002 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:06.168996 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:06.169399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:06.169988 systemd[1]: kubelet.service: Consumed 177ms CPU time, 96.5M memory peak. 
Jan 29 16:22:15.997001 systemd[1]: Started sshd@22-167.235.198.80:22-134.122.8.241:57238.service - OpenSSH per-connection server daemon (134.122.8.241:57238). Jan 29 16:22:16.195178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 29 16:22:16.204194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:16.336426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:22:16.348425 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:16.404681 kubelet[1844]: E0129 16:22:16.404602 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:16.407917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:16.408100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:16.408967 systemd[1]: kubelet.service: Consumed 166ms CPU time, 96M memory peak. Jan 29 16:22:16.525865 sshd[1834]: Invalid user teamspeak from 134.122.8.241 port 57238 Jan 29 16:22:16.619344 sshd[1834]: Received disconnect from 134.122.8.241 port 57238:11: Bye Bye [preauth] Jan 29 16:22:16.620867 sshd[1834]: Disconnected from invalid user teamspeak 134.122.8.241 port 57238 [preauth] Jan 29 16:22:16.621275 systemd[1]: sshd@22-167.235.198.80:22-134.122.8.241:57238.service: Deactivated successfully. Jan 29 16:22:26.444733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 29 16:22:26.456804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:26.605489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:22:26.617373 (kubelet)[1862]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:26.678365 kubelet[1862]: E0129 16:22:26.678295 1862 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:26.682079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:26.682889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:26.683349 systemd[1]: kubelet.service: Consumed 185ms CPU time, 95.1M memory peak. Jan 29 16:22:26.812325 systemd[1]: Started sshd@23-167.235.198.80:22-149.50.252.131:46490.service - OpenSSH per-connection server daemon (149.50.252.131:46490). Jan 29 16:22:26.839270 systemd[1]: Started sshd@24-167.235.198.80:22-149.50.252.131:46494.service - OpenSSH per-connection server daemon (149.50.252.131:46494). Jan 29 16:22:26.978477 sshd[1872]: Connection closed by 149.50.252.131 port 46490 [preauth] Jan 29 16:22:26.979421 systemd[1]: sshd@23-167.235.198.80:22-149.50.252.131:46490.service: Deactivated successfully. 
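[editor's note] The sshd entries from 149.50.252.131 and 134.122.8.241 ("Connection closed ... [preauth]", "Invalid user teamspeak") are routine internet-wide SSH scanning; none of them get past pre-authentication. A small sketch, again assuming journal text on stdin, that tallies pre-auth disconnects per source address; the pattern is an assumption matched to the lines above, and a single probe can emit more than one matching line, so the counts are an upper bound.

```python
#!/usr/bin/env python3
"""Sketch: tally pre-auth SSH probes per source IP from journal text on stdin."""
import re
import sys
from collections import Counter

# Matches lines such as:
#   sshd[1721]: Connection closed by 149.50.252.131 port 35044 [preauth]
#   sshd[1834]: Disconnected from invalid user teamspeak 134.122.8.241 port 57238 [preauth]
PROBE = re.compile(r"sshd\[\d+\]: .*?(\d{1,3}(?:\.\d{1,3}){3}) port \d+.*\[preauth\]")

counts = Counter(m.group(1) for m in map(PROBE.search, sys.stdin) if m)
for ip, n in counts.most_common():
    print(f"{ip}\t{n} pre-auth disconnect lines")
```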
Jan 29 16:22:27.044393 sshd[1875]: Connection closed by 149.50.252.131 port 46494 [preauth] Jan 29 16:22:27.045263 systemd[1]: sshd@24-167.235.198.80:22-149.50.252.131:46494.service: Deactivated successfully. Jan 29 16:22:30.262220 systemd[1]: Started sshd@25-167.235.198.80:22-103.142.199.159:47340.service - OpenSSH per-connection server daemon (103.142.199.159:47340). Jan 29 16:22:31.220927 sshd[1882]: Received disconnect from 103.142.199.159 port 47340:11: Bye Bye [preauth] Jan 29 16:22:31.220927 sshd[1882]: Disconnected from authenticating user root 103.142.199.159 port 47340 [preauth] Jan 29 16:22:31.223404 systemd[1]: sshd@25-167.235.198.80:22-103.142.199.159:47340.service: Deactivated successfully. Jan 29 16:22:36.695548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 29 16:22:36.706663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:36.894073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:22:36.900643 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:36.967563 kubelet[1893]: E0129 16:22:36.966367 1893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:36.970596 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:36.970765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:36.971576 systemd[1]: kubelet.service: Consumed 209ms CPU time, 94.5M memory peak. Jan 29 16:22:47.194470 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 29 16:22:47.215393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:47.418487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:22:47.425009 (kubelet)[1910]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:47.483063 kubelet[1910]: E0129 16:22:47.482902 1910 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:47.486433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:47.486607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:47.489322 systemd[1]: kubelet.service: Consumed 228ms CPU time, 96.8M memory peak. Jan 29 16:22:53.133326 systemd[1]: Started sshd@26-167.235.198.80:22-139.178.68.195:52060.service - OpenSSH per-connection server daemon (139.178.68.195:52060). Jan 29 16:22:54.121876 sshd[1919]: Accepted publickey for core from 139.178.68.195 port 52060 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:22:54.125937 sshd-session[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:54.142411 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 29 16:22:54.153587 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 16:22:54.166647 systemd-logind[1475]: New session 1 of user core. Jan 29 16:22:54.178311 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 16:22:54.188604 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 16:22:54.196037 (systemd)[1923]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 16:22:54.204018 systemd-logind[1475]: New session c1 of user core. Jan 29 16:22:54.356484 systemd[1923]: Queued start job for default target default.target. Jan 29 16:22:54.365880 systemd[1923]: Created slice app.slice - User Application Slice. Jan 29 16:22:54.365945 systemd[1923]: Reached target paths.target - Paths. Jan 29 16:22:54.366027 systemd[1923]: Reached target timers.target - Timers. Jan 29 16:22:54.368727 systemd[1923]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 16:22:54.380917 systemd[1923]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 16:22:54.381881 systemd[1923]: Reached target sockets.target - Sockets. Jan 29 16:22:54.382012 systemd[1923]: Reached target basic.target - Basic System. Jan 29 16:22:54.382048 systemd[1923]: Reached target default.target - Main User Target. Jan 29 16:22:54.382079 systemd[1923]: Startup finished in 166ms. Jan 29 16:22:54.382202 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 16:22:54.390564 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 16:22:55.099613 systemd[1]: Started sshd@27-167.235.198.80:22-139.178.68.195:41364.service - OpenSSH per-connection server daemon (139.178.68.195:41364). Jan 29 16:22:56.104293 sshd[1934]: Accepted publickey for core from 139.178.68.195 port 41364 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:22:56.107135 sshd-session[1934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:56.114833 systemd-logind[1475]: New session 2 of user core. Jan 29 16:22:56.125105 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 16:22:56.793869 sshd[1936]: Connection closed by 139.178.68.195 port 41364 Jan 29 16:22:56.793746 sshd-session[1934]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:56.798286 systemd[1]: sshd@27-167.235.198.80:22-139.178.68.195:41364.service: Deactivated successfully. Jan 29 16:22:56.800323 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 16:22:56.806525 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. Jan 29 16:22:56.808092 systemd-logind[1475]: Removed session 2. Jan 29 16:22:56.979569 systemd[1]: Started sshd@28-167.235.198.80:22-139.178.68.195:41374.service - OpenSSH per-connection server daemon (139.178.68.195:41374). Jan 29 16:22:57.694459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 29 16:22:57.704132 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:22:57.854083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 16:22:57.855471 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:22:57.912509 kubelet[1952]: E0129 16:22:57.912429 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:22:57.916051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:22:57.916253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:22:57.917168 systemd[1]: kubelet.service: Consumed 168ms CPU time, 96.3M memory peak. Jan 29 16:22:57.954584 sshd[1942]: Accepted publickey for core from 139.178.68.195 port 41374 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:22:57.956664 sshd-session[1942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:57.962446 systemd-logind[1475]: New session 3 of user core. Jan 29 16:22:57.974151 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 16:22:58.629357 sshd[1960]: Connection closed by 139.178.68.195 port 41374 Jan 29 16:22:58.628516 sshd-session[1942]: pam_unix(sshd:session): session closed for user core Jan 29 16:22:58.634465 systemd[1]: sshd@28-167.235.198.80:22-139.178.68.195:41374.service: Deactivated successfully. Jan 29 16:22:58.639522 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 16:22:58.640805 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. Jan 29 16:22:58.642211 systemd-logind[1475]: Removed session 3. Jan 29 16:22:58.804700 systemd[1]: Started sshd@29-167.235.198.80:22-139.178.68.195:41388.service - OpenSSH per-connection server daemon (139.178.68.195:41388). Jan 29 16:22:59.802577 sshd[1966]: Accepted publickey for core from 139.178.68.195 port 41388 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:22:59.805431 sshd-session[1966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:22:59.810711 systemd-logind[1475]: New session 4 of user core. Jan 29 16:22:59.823567 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 16:23:00.491869 sshd[1968]: Connection closed by 139.178.68.195 port 41388 Jan 29 16:23:00.492843 sshd-session[1966]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:00.500335 systemd[1]: sshd@29-167.235.198.80:22-139.178.68.195:41388.service: Deactivated successfully. Jan 29 16:23:00.503350 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 16:23:00.508381 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. Jan 29 16:23:00.511240 systemd-logind[1475]: Removed session 4. Jan 29 16:23:00.679314 systemd[1]: Started sshd@30-167.235.198.80:22-139.178.68.195:41396.service - OpenSSH per-connection server daemon (139.178.68.195:41396). Jan 29 16:23:01.663396 sshd[1974]: Accepted publickey for core from 139.178.68.195 port 41396 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:23:01.665417 sshd-session[1974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:01.672711 systemd-logind[1475]: New session 5 of user core. Jan 29 16:23:01.679181 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 29 16:23:02.202157 sudo[1977]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 16:23:02.203746 sudo[1977]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:02.228009 sudo[1977]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:02.390991 sshd[1976]: Connection closed by 139.178.68.195 port 41396 Jan 29 16:23:02.392018 sshd-session[1974]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:02.402553 systemd[1]: sshd@30-167.235.198.80:22-139.178.68.195:41396.service: Deactivated successfully. Jan 29 16:23:02.408521 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 16:23:02.410949 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. Jan 29 16:23:02.413291 systemd-logind[1475]: Removed session 5. Jan 29 16:23:02.566259 systemd[1]: Started sshd@31-167.235.198.80:22-139.178.68.195:41400.service - OpenSSH per-connection server daemon (139.178.68.195:41400). Jan 29 16:23:03.554546 sshd[1983]: Accepted publickey for core from 139.178.68.195 port 41400 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:23:03.557223 sshd-session[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:03.562392 systemd-logind[1475]: New session 6 of user core. Jan 29 16:23:03.570154 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 16:23:04.077910 sudo[1987]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 16:23:04.078210 sudo[1987]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:04.082284 sudo[1987]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:04.088654 sudo[1986]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 16:23:04.089376 sudo[1986]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:04.109360 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 16:23:04.143260 augenrules[2009]: No rules Jan 29 16:23:04.145451 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 16:23:04.146994 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 16:23:04.148235 sudo[1986]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:04.307348 sshd[1985]: Connection closed by 139.178.68.195 port 41400 Jan 29 16:23:04.308862 sshd-session[1983]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:04.313405 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. Jan 29 16:23:04.314439 systemd[1]: sshd@31-167.235.198.80:22-139.178.68.195:41400.service: Deactivated successfully. Jan 29 16:23:04.317008 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 16:23:04.319454 systemd-logind[1475]: Removed session 6. Jan 29 16:23:04.489232 systemd[1]: Started sshd@32-167.235.198.80:22-139.178.68.195:41402.service - OpenSSH per-connection server daemon (139.178.68.195:41402). Jan 29 16:23:05.492439 sshd[2018]: Accepted publickey for core from 139.178.68.195 port 41402 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:23:05.494449 sshd-session[2018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:23:05.508463 systemd-logind[1475]: New session 7 of user core. Jan 29 16:23:05.519147 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 29 16:23:06.014672 sudo[2021]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 16:23:06.015033 sudo[2021]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 16:23:06.395280 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 16:23:06.395746 (dockerd)[2038]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 16:23:06.669064 dockerd[2038]: time="2025-01-29T16:23:06.668921824Z" level=info msg="Starting up" Jan 29 16:23:06.789371 dockerd[2038]: time="2025-01-29T16:23:06.788969025Z" level=info msg="Loading containers: start." Jan 29 16:23:07.001874 kernel: Initializing XFRM netlink socket Jan 29 16:23:07.136433 systemd-networkd[1393]: docker0: Link UP Jan 29 16:23:07.185247 dockerd[2038]: time="2025-01-29T16:23:07.184049584Z" level=info msg="Loading containers: done." Jan 29 16:23:07.205549 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3035283228-merged.mount: Deactivated successfully. Jan 29 16:23:07.210067 dockerd[2038]: time="2025-01-29T16:23:07.209892672Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 16:23:07.210604 dockerd[2038]: time="2025-01-29T16:23:07.210153805Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 29 16:23:07.210916 dockerd[2038]: time="2025-01-29T16:23:07.210688432Z" level=info msg="Daemon has completed initialization" Jan 29 16:23:07.272627 dockerd[2038]: time="2025-01-29T16:23:07.271618631Z" level=info msg="API listen on /run/docker.sock" Jan 29 16:23:07.271970 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 16:23:07.945611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 29 16:23:07.956276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:08.123230 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:08.124946 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:08.186109 kubelet[2235]: E0129 16:23:08.186015 2235 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:08.189878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:08.190088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:08.190928 systemd[1]: kubelet.service: Consumed 187ms CPU time, 96.3M memory peak. Jan 29 16:23:08.533642 containerd[1498]: time="2025-01-29T16:23:08.533164962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 16:23:09.159083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461731162.mount: Deactivated successfully. 
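[editor's note] Once install.sh runs under sudo, dockerd starts and reports "API listen on /run/docker.sock". A quick liveness check against that socket can use the Engine API's /_ping endpoint; the sketch below speaks raw HTTP over the Unix socket with only the standard library and assumes the socket path quoted in the log.

```python
#!/usr/bin/env python3
"""Sketch: ping the Docker Engine API over the Unix socket reported in the log."""
import socket

SOCKET_PATH = "/run/docker.sock"  # path taken from the "API listen on" log line

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCKET_PATH)
    # /_ping is the Engine API health endpoint; it answers "OK" when the daemon is up.
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```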
Jan 29 16:23:11.218923 containerd[1498]: time="2025-01-29T16:23:11.218082930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:11.220393 containerd[1498]: time="2025-01-29T16:23:11.220332444Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29865027" Jan 29 16:23:11.222013 containerd[1498]: time="2025-01-29T16:23:11.221910491Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:11.226771 containerd[1498]: time="2025-01-29T16:23:11.226702074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:11.229401 containerd[1498]: time="2025-01-29T16:23:11.229315660Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.696094456s" Jan 29 16:23:11.229401 containerd[1498]: time="2025-01-29T16:23:11.229374779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 29 16:23:11.256228 containerd[1498]: time="2025-01-29T16:23:11.256126993Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 16:23:13.590262 containerd[1498]: time="2025-01-29T16:23:13.590197009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:13.592304 containerd[1498]: time="2025-01-29T16:23:13.592040458Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901581" Jan 29 16:23:13.594841 containerd[1498]: time="2025-01-29T16:23:13.594085303Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:13.597433 containerd[1498]: time="2025-01-29T16:23:13.597368967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:13.598735 containerd[1498]: time="2025-01-29T16:23:13.598648066Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 2.342337877s" Jan 29 16:23:13.598954 containerd[1498]: time="2025-01-29T16:23:13.598929021Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 29 16:23:13.623561 
containerd[1498]: time="2025-01-29T16:23:13.623500884Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 16:23:15.199008 containerd[1498]: time="2025-01-29T16:23:15.197469394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:15.199008 containerd[1498]: time="2025-01-29T16:23:15.198946334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164358" Jan 29 16:23:15.200307 containerd[1498]: time="2025-01-29T16:23:15.200241636Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:15.205420 containerd[1498]: time="2025-01-29T16:23:15.205344406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:15.208124 containerd[1498]: time="2025-01-29T16:23:15.207852012Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.583982134s" Jan 29 16:23:15.208124 containerd[1498]: time="2025-01-29T16:23:15.207903171Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 29 16:23:15.231433 containerd[1498]: time="2025-01-29T16:23:15.231388130Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 16:23:16.178260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1688817872.mount: Deactivated successfully. 
Jan 29 16:23:16.555185 containerd[1498]: time="2025-01-29T16:23:16.554273172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:16.556799 containerd[1498]: time="2025-01-29T16:23:16.556706622Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662738" Jan 29 16:23:16.558669 containerd[1498]: time="2025-01-29T16:23:16.558588080Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:16.562363 containerd[1498]: time="2025-01-29T16:23:16.561898999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:16.563049 containerd[1498]: time="2025-01-29T16:23:16.562998786Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.331566776s" Jan 29 16:23:16.563049 containerd[1498]: time="2025-01-29T16:23:16.563042386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 16:23:16.590841 containerd[1498]: time="2025-01-29T16:23:16.590772409Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 16:23:17.071236 systemd[1]: Started sshd@33-167.235.198.80:22-149.50.252.131:60032.service - OpenSSH per-connection server daemon (149.50.252.131:60032). Jan 29 16:23:17.131218 systemd[1]: Started sshd@34-167.235.198.80:22-149.50.252.131:60048.service - OpenSSH per-connection server daemon (149.50.252.131:60048). Jan 29 16:23:17.213694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263417667.mount: Deactivated successfully. Jan 29 16:23:17.261349 sshd[2338]: Connection closed by 149.50.252.131 port 60032 [preauth] Jan 29 16:23:17.263275 systemd[1]: sshd@33-167.235.198.80:22-149.50.252.131:60032.service: Deactivated successfully. Jan 29 16:23:17.320406 sshd[2341]: Connection closed by 149.50.252.131 port 60048 [preauth] Jan 29 16:23:17.323169 systemd[1]: sshd@34-167.235.198.80:22-149.50.252.131:60048.service: Deactivated successfully. 
Jan 29 16:23:17.918909 containerd[1498]: time="2025-01-29T16:23:17.918786648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:17.922385 containerd[1498]: time="2025-01-29T16:23:17.921915655Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Jan 29 16:23:17.924971 containerd[1498]: time="2025-01-29T16:23:17.924739305Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:17.929322 containerd[1498]: time="2025-01-29T16:23:17.929229697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:17.931481 containerd[1498]: time="2025-01-29T16:23:17.930409645Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.339572556s" Jan 29 16:23:17.931481 containerd[1498]: time="2025-01-29T16:23:17.930460524Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 29 16:23:17.960161 containerd[1498]: time="2025-01-29T16:23:17.960044970Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 16:23:18.195119 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 29 16:23:18.216193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:18.361076 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:18.364213 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:18.426649 kubelet[2404]: E0129 16:23:18.426576 2404 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:18.430160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:18.430341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:18.431629 systemd[1]: kubelet.service: Consumed 171ms CPU time, 94.6M memory peak. Jan 29 16:23:18.584208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123728681.mount: Deactivated successfully. 
Jan 29 16:23:18.597005 containerd[1498]: time="2025-01-29T16:23:18.596942036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:18.601833 containerd[1498]: time="2025-01-29T16:23:18.601699752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Jan 29 16:23:18.607240 containerd[1498]: time="2025-01-29T16:23:18.603853213Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:18.611800 containerd[1498]: time="2025-01-29T16:23:18.611273705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:18.612189 containerd[1498]: time="2025-01-29T16:23:18.612122057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 652.018648ms" Jan 29 16:23:18.612448 containerd[1498]: time="2025-01-29T16:23:18.612371135Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 29 16:23:18.664016 containerd[1498]: time="2025-01-29T16:23:18.663965382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 16:23:19.312393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535709539.mount: Deactivated successfully. Jan 29 16:23:22.817800 systemd[1]: Started sshd@35-167.235.198.80:22-134.122.8.241:55588.service - OpenSSH per-connection server daemon (134.122.8.241:55588). Jan 29 16:23:23.345571 sshd[2462]: Invalid user user2 from 134.122.8.241 port 55588 Jan 29 16:23:23.439150 sshd[2462]: Received disconnect from 134.122.8.241 port 55588:11: Bye Bye [preauth] Jan 29 16:23:23.440867 sshd[2462]: Disconnected from invalid user user2 134.122.8.241 port 55588 [preauth] Jan 29 16:23:23.443287 systemd[1]: sshd@35-167.235.198.80:22-134.122.8.241:55588.service: Deactivated successfully. 
Jan 29 16:23:23.603603 containerd[1498]: time="2025-01-29T16:23:23.603413623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:23.607288 containerd[1498]: time="2025-01-29T16:23:23.607220973Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Jan 29 16:23:23.610863 containerd[1498]: time="2025-01-29T16:23:23.609372568Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:23.615771 containerd[1498]: time="2025-01-29T16:23:23.615702712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:23.621870 containerd[1498]: time="2025-01-29T16:23:23.621784936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.957767034s" Jan 29 16:23:23.622122 containerd[1498]: time="2025-01-29T16:23:23.622080976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 29 16:23:28.444216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 29 16:23:28.454056 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:28.638028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:28.649791 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 16:23:28.701938 kubelet[2536]: E0129 16:23:28.701645 2536 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 16:23:28.705756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 16:23:28.705920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 16:23:28.706548 systemd[1]: kubelet.service: Consumed 165ms CPU time, 94.3M memory peak. Jan 29 16:23:29.554114 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:29.554637 systemd[1]: kubelet.service: Consumed 165ms CPU time, 94.3M memory peak. Jan 29 16:23:29.562218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:29.598965 systemd[1]: Reload requested from client PID 2550 ('systemctl') (unit session-7.scope)... Jan 29 16:23:29.598986 systemd[1]: Reloading... Jan 29 16:23:29.752850 zram_generator::config[2598]: No configuration found. Jan 29 16:23:29.859873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:29.969161 systemd[1]: Reloading finished in 369 ms. 
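[editor's note] The containerd "Pulled image ... in ..." messages above report both an image size and the wall-clock time of each pull, so the effective transfer rate is easy to estimate. A worked sketch using the figures quoted in the log; it treats the reported image size as the bytes fetched, which ignores compression and layer reuse, so the rates are only approximate.

```python
#!/usr/bin/env python3
"""Sketch: effective pull rates from the sizes/durations containerd reported above."""

# (bytes, seconds) pairs copied from the "Pulled image ... in ..." log lines.
pulls = {
    "kube-apiserver:v1.30.9":          (29_861_735, 2.696094456),
    "kube-controller-manager:v1.30.9": (28_305_351, 2.342337877),
    "kube-scheduler:v1.30.9":          (17_568_146, 1.583982134),
    "kube-proxy:v1.30.9":              (25_661_731, 1.331566776),
    "coredns:v1.11.1":                 (16_482_581, 1.339572556),
    "pause:3.9":                       (268_051,    0.652018648),
    "etcd:3.5.12-0":                   (66_189_079, 4.957767034),
}

for image, (size, seconds) in pulls.items():
    rate = size / seconds / (1 << 20)  # MiB/s, treating size as bytes transferred
    print(f"{image:35s} {size / (1 << 20):7.1f} MiB in {seconds:6.2f}s ~ {rate:5.1f} MiB/s")
```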
Jan 29 16:23:30.033866 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:30.035520 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:23:30.035766 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:30.036924 systemd[1]: kubelet.service: Consumed 106ms CPU time, 81.5M memory peak. Jan 29 16:23:30.052235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:30.213794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:30.233147 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:23:30.290982 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:30.290982 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:23:30.290982 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:30.290982 kubelet[2645]: I0129 16:23:30.289296 2645 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:23:31.180333 kubelet[2645]: I0129 16:23:31.180250 2645 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:23:31.180333 kubelet[2645]: I0129 16:23:31.180313 2645 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:23:31.186975 kubelet[2645]: I0129 16:23:31.184804 2645 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:23:31.212986 kubelet[2645]: E0129 16:23:31.212941 2645 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://167.235.198.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.213847 kubelet[2645]: I0129 16:23:31.213735 2645 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:23:31.226738 kubelet[2645]: I0129 16:23:31.226695 2645 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:23:31.229199 kubelet[2645]: I0129 16:23:31.229115 2645 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:23:31.229674 kubelet[2645]: I0129 16:23:31.229432 2645 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-e-139a7b6c18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:23:31.229923 kubelet[2645]: I0129 16:23:31.229906 2645 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:23:31.229987 kubelet[2645]: I0129 16:23:31.229979 2645 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:23:31.230396 kubelet[2645]: I0129 16:23:31.230375 2645 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:31.232070 kubelet[2645]: I0129 16:23:31.232039 2645 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:23:31.232245 kubelet[2645]: I0129 16:23:31.232233 2645 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:23:31.232488 kubelet[2645]: I0129 16:23:31.232476 2645 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:23:31.232558 kubelet[2645]: I0129 16:23:31.232550 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:23:31.234104 kubelet[2645]: W0129 16:23:31.234030 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-e-139a7b6c18&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.234334 kubelet[2645]: E0129 16:23:31.234320 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-e-139a7b6c18&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.234565 kubelet[2645]: W0129 16:23:31.234527 2645 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.235046 kubelet[2645]: E0129 16:23:31.235024 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.235893 kubelet[2645]: I0129 16:23:31.235444 2645 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:23:31.236033 kubelet[2645]: I0129 16:23:31.236019 2645 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:23:31.236208 kubelet[2645]: W0129 16:23:31.236196 2645 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 16:23:31.237667 kubelet[2645]: I0129 16:23:31.237636 2645 server.go:1264] "Started kubelet" Jan 29 16:23:31.244097 kubelet[2645]: E0129 16:23:31.243030 2645 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://167.235.198.80:6443/api/v1/namespaces/default/events\": dial tcp 167.235.198.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-e-139a7b6c18.181f366a44d5efa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-e-139a7b6c18,UID:ci-4230-0-0-e-139a7b6c18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-e-139a7b6c18,},FirstTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,LastTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-e-139a7b6c18,}" Jan 29 16:23:31.244097 kubelet[2645]: I0129 16:23:31.243712 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:23:31.244680 kubelet[2645]: I0129 16:23:31.244436 2645 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:23:31.247982 kubelet[2645]: I0129 16:23:31.247939 2645 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:23:31.250869 kubelet[2645]: I0129 16:23:31.249393 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:23:31.250869 kubelet[2645]: I0129 16:23:31.249614 2645 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:23:31.252437 kubelet[2645]: I0129 16:23:31.252403 2645 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:23:31.255745 kubelet[2645]: I0129 16:23:31.255698 2645 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:23:31.257339 kubelet[2645]: I0129 16:23:31.257307 2645 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:23:31.262448 kubelet[2645]: W0129 16:23:31.262331 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 
29 16:23:31.262784 kubelet[2645]: E0129 16:23:31.262763 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.263272 kubelet[2645]: E0129 16:23:31.263200 2645 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-e-139a7b6c18?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="200ms" Jan 29 16:23:31.263586 kubelet[2645]: E0129 16:23:31.263564 2645 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:23:31.265500 kubelet[2645]: I0129 16:23:31.265460 2645 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:23:31.265884 kubelet[2645]: I0129 16:23:31.265858 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:23:31.271867 kubelet[2645]: I0129 16:23:31.270680 2645 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:23:31.288917 kubelet[2645]: I0129 16:23:31.288787 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:23:31.290598 kubelet[2645]: I0129 16:23:31.290528 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:23:31.290800 kubelet[2645]: I0129 16:23:31.290725 2645 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:23:31.290800 kubelet[2645]: I0129 16:23:31.290766 2645 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:23:31.290961 kubelet[2645]: E0129 16:23:31.290906 2645 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:23:31.299836 kubelet[2645]: W0129 16:23:31.299740 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.300606 kubelet[2645]: E0129 16:23:31.300310 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:31.307848 kubelet[2645]: I0129 16:23:31.307754 2645 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:23:31.307848 kubelet[2645]: I0129 16:23:31.307778 2645 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:23:31.307848 kubelet[2645]: I0129 16:23:31.307805 2645 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:31.310264 kubelet[2645]: I0129 16:23:31.310227 2645 policy_none.go:49] "None policy: Start" Jan 29 16:23:31.311988 kubelet[2645]: I0129 16:23:31.311509 2645 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:23:31.311988 kubelet[2645]: I0129 16:23:31.311550 2645 state_mem.go:35] "Initializing new 
in-memory state store" Jan 29 16:23:31.320925 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 16:23:31.335932 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 16:23:31.341674 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 16:23:31.351730 kubelet[2645]: I0129 16:23:31.351677 2645 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:23:31.352859 kubelet[2645]: I0129 16:23:31.352279 2645 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:23:31.352859 kubelet[2645]: I0129 16:23:31.352698 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:23:31.357646 kubelet[2645]: I0129 16:23:31.357610 2645 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.359235 kubelet[2645]: E0129 16:23:31.359182 2645 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.359574 kubelet[2645]: E0129 16:23:31.359552 2645 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-0-e-139a7b6c18\" not found" Jan 29 16:23:31.391957 kubelet[2645]: I0129 16:23:31.391857 2645 topology_manager.go:215] "Topology Admit Handler" podUID="65a5389adc32590f2f0bf8e7a7caad99" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.395896 kubelet[2645]: I0129 16:23:31.395760 2645 topology_manager.go:215] "Topology Admit Handler" podUID="6db7b489acb9c0d50ae13e6fea9b04cb" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.399084 kubelet[2645]: I0129 16:23:31.398780 2645 topology_manager.go:215] "Topology Admit Handler" podUID="a6a1b314d60f03ba7f43df755cbf4185" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.411971 systemd[1]: Created slice kubepods-burstable-pod65a5389adc32590f2f0bf8e7a7caad99.slice - libcontainer container kubepods-burstable-pod65a5389adc32590f2f0bf8e7a7caad99.slice. Jan 29 16:23:31.440116 systemd[1]: Created slice kubepods-burstable-pod6db7b489acb9c0d50ae13e6fea9b04cb.slice - libcontainer container kubepods-burstable-pod6db7b489acb9c0d50ae13e6fea9b04cb.slice. 
Jan 29 16:23:31.458487 kubelet[2645]: I0129 16:23:31.458046 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.458487 kubelet[2645]: I0129 16:23:31.458113 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.458487 kubelet[2645]: I0129 16:23:31.458144 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.458487 kubelet[2645]: I0129 16:23:31.458173 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6a1b314d60f03ba7f43df755cbf4185-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-e-139a7b6c18\" (UID: \"a6a1b314d60f03ba7f43df755cbf4185\") " pod="kube-system/kube-scheduler-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.458487 kubelet[2645]: I0129 16:23:31.458208 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.459075 kubelet[2645]: I0129 16:23:31.458231 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.459075 kubelet[2645]: I0129 16:23:31.458249 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.459075 kubelet[2645]: I0129 16:23:31.458282 2645 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.459075 kubelet[2645]: I0129 16:23:31.458303 2645 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.459129 systemd[1]: Created slice kubepods-burstable-poda6a1b314d60f03ba7f43df755cbf4185.slice - libcontainer container kubepods-burstable-poda6a1b314d60f03ba7f43df755cbf4185.slice. Jan 29 16:23:31.464159 kubelet[2645]: E0129 16:23:31.464063 2645 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-e-139a7b6c18?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="400ms" Jan 29 16:23:31.562884 kubelet[2645]: I0129 16:23:31.562838 2645 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.563608 kubelet[2645]: E0129 16:23:31.563558 2645 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:31.662498 kubelet[2645]: E0129 16:23:31.662325 2645 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://167.235.198.80:6443/api/v1/namespaces/default/events\": dial tcp 167.235.198.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-0-e-139a7b6c18.181f366a44d5efa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-e-139a7b6c18,UID:ci-4230-0-0-e-139a7b6c18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-e-139a7b6c18,},FirstTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,LastTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-e-139a7b6c18,}" Jan 29 16:23:31.735804 containerd[1498]: time="2025-01-29T16:23:31.735603183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-e-139a7b6c18,Uid:65a5389adc32590f2f0bf8e7a7caad99,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:31.760842 containerd[1498]: time="2025-01-29T16:23:31.760118093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-e-139a7b6c18,Uid:6db7b489acb9c0d50ae13e6fea9b04cb,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:31.770600 containerd[1498]: time="2025-01-29T16:23:31.770545277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-e-139a7b6c18,Uid:a6a1b314d60f03ba7f43df755cbf4185,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:31.865770 kubelet[2645]: E0129 16:23:31.865695 2645 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-e-139a7b6c18?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="800ms" Jan 29 16:23:31.971391 kubelet[2645]: I0129 16:23:31.970115 2645 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 
29 16:23:31.971391 kubelet[2645]: E0129 16:23:31.970698 2645 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:32.358880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157325306.mount: Deactivated successfully. Jan 29 16:23:32.370874 containerd[1498]: time="2025-01-29T16:23:32.370430545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:32.374399 containerd[1498]: time="2025-01-29T16:23:32.374297652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 29 16:23:32.377537 kubelet[2645]: W0129 16:23:32.377454 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.377537 kubelet[2645]: E0129 16:23:32.377507 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://167.235.198.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.382612 containerd[1498]: time="2025-01-29T16:23:32.381508223Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:32.386294 containerd[1498]: time="2025-01-29T16:23:32.386078216Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:32.387793 containerd[1498]: time="2025-01-29T16:23:32.387695027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:23:32.393883 containerd[1498]: time="2025-01-29T16:23:32.393232546Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:32.395655 containerd[1498]: time="2025-01-29T16:23:32.395472682Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 16:23:32.396701 containerd[1498]: time="2025-01-29T16:23:32.396655651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 16:23:32.398532 containerd[1498]: time="2025-01-29T16:23:32.398395183Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 662.662239ms" Jan 29 16:23:32.402869 containerd[1498]: time="2025-01-29T16:23:32.402072689Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 631.379451ms" Jan 29 16:23:32.420232 containerd[1498]: time="2025-01-29T16:23:32.420179777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 659.862522ms" Jan 29 16:23:32.464628 kubelet[2645]: W0129 16:23:32.464530 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-e-139a7b6c18&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.464628 kubelet[2645]: E0129 16:23:32.464625 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://167.235.198.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-0-e-139a7b6c18&limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.497478 kubelet[2645]: W0129 16:23:32.497338 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.498500 kubelet[2645]: E0129 16:23:32.497901 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://167.235.198.80:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.566840 containerd[1498]: time="2025-01-29T16:23:32.566274610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:32.566840 containerd[1498]: time="2025-01-29T16:23:32.566483852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:32.566840 containerd[1498]: time="2025-01-29T16:23:32.566503652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.566840 containerd[1498]: time="2025-01-29T16:23:32.566607893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.571652 containerd[1498]: time="2025-01-29T16:23:32.569639594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:32.571652 containerd[1498]: time="2025-01-29T16:23:32.569710715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:32.571652 containerd[1498]: time="2025-01-29T16:23:32.569723435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.571652 containerd[1498]: time="2025-01-29T16:23:32.569836635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.572064 containerd[1498]: time="2025-01-29T16:23:32.571396567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:32.572064 containerd[1498]: time="2025-01-29T16:23:32.571474447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:32.572064 containerd[1498]: time="2025-01-29T16:23:32.571491447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.572064 containerd[1498]: time="2025-01-29T16:23:32.571581808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:32.604122 systemd[1]: Started cri-containerd-20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2.scope - libcontainer container 20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2. Jan 29 16:23:32.607002 systemd[1]: Started cri-containerd-9aed00242f03908ab0f6cc176b2654755316fe32638d781d7812721bfb200466.scope - libcontainer container 9aed00242f03908ab0f6cc176b2654755316fe32638d781d7812721bfb200466. Jan 29 16:23:32.628322 systemd[1]: Started cri-containerd-08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449.scope - libcontainer container 08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449. Jan 29 16:23:32.669934 kubelet[2645]: E0129 16:23:32.668581 2645 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.198.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-0-e-139a7b6c18?timeout=10s\": dial tcp 167.235.198.80:6443: connect: connection refused" interval="1.6s" Jan 29 16:23:32.684243 containerd[1498]: time="2025-01-29T16:23:32.684195204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-0-e-139a7b6c18,Uid:65a5389adc32590f2f0bf8e7a7caad99,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aed00242f03908ab0f6cc176b2654755316fe32638d781d7812721bfb200466\"" Jan 29 16:23:32.706801 containerd[1498]: time="2025-01-29T16:23:32.706267040Z" level=info msg="CreateContainer within sandbox \"9aed00242f03908ab0f6cc176b2654755316fe32638d781d7812721bfb200466\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 16:23:32.722022 containerd[1498]: time="2025-01-29T16:23:32.721903751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-0-e-139a7b6c18,Uid:a6a1b314d60f03ba7f43df755cbf4185,Namespace:kube-system,Attempt:0,} returns sandbox id \"20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2\"" Jan 29 16:23:32.731009 containerd[1498]: time="2025-01-29T16:23:32.730961095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-0-e-139a7b6c18,Uid:6db7b489acb9c0d50ae13e6fea9b04cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449\"" Jan 29 16:23:32.734840 containerd[1498]: time="2025-01-29T16:23:32.734743202Z" level=info msg="CreateContainer within sandbox 
\"20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 16:23:32.746561 containerd[1498]: time="2025-01-29T16:23:32.746175563Z" level=info msg="CreateContainer within sandbox \"9aed00242f03908ab0f6cc176b2654755316fe32638d781d7812721bfb200466\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f6ba1c90e7b53ec9d1033da3f0273e33f522830944508bb8b945fe4d47b6a609\"" Jan 29 16:23:32.752130 containerd[1498]: time="2025-01-29T16:23:32.751752202Z" level=info msg="StartContainer for \"f6ba1c90e7b53ec9d1033da3f0273e33f522830944508bb8b945fe4d47b6a609\"" Jan 29 16:23:32.752317 containerd[1498]: time="2025-01-29T16:23:32.752078485Z" level=info msg="CreateContainer within sandbox \"08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 16:23:32.774405 kubelet[2645]: I0129 16:23:32.774161 2645 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:32.776155 kubelet[2645]: E0129 16:23:32.776084 2645 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://167.235.198.80:6443/api/v1/nodes\": dial tcp 167.235.198.80:6443: connect: connection refused" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:32.779091 containerd[1498]: time="2025-01-29T16:23:32.778931354Z" level=info msg="CreateContainer within sandbox \"20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277\"" Jan 29 16:23:32.779923 containerd[1498]: time="2025-01-29T16:23:32.779748200Z" level=info msg="StartContainer for \"a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277\"" Jan 29 16:23:32.799997 systemd[1]: Started cri-containerd-f6ba1c90e7b53ec9d1033da3f0273e33f522830944508bb8b945fe4d47b6a609.scope - libcontainer container f6ba1c90e7b53ec9d1033da3f0273e33f522830944508bb8b945fe4d47b6a609. Jan 29 16:23:32.802021 containerd[1498]: time="2025-01-29T16:23:32.801932157Z" level=info msg="CreateContainer within sandbox \"08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec\"" Jan 29 16:23:32.804460 containerd[1498]: time="2025-01-29T16:23:32.803912491Z" level=info msg="StartContainer for \"1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec\"" Jan 29 16:23:32.843426 systemd[1]: Started cri-containerd-a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277.scope - libcontainer container a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277. Jan 29 16:23:32.880314 systemd[1]: Started cri-containerd-1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec.scope - libcontainer container 1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec. 
Jan 29 16:23:32.892404 containerd[1498]: time="2025-01-29T16:23:32.892133275Z" level=info msg="StartContainer for \"f6ba1c90e7b53ec9d1033da3f0273e33f522830944508bb8b945fe4d47b6a609\" returns successfully" Jan 29 16:23:32.897516 kubelet[2645]: W0129 16:23:32.897404 2645 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.897516 kubelet[2645]: E0129 16:23:32.897495 2645 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://167.235.198.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.198.80:6443: connect: connection refused Jan 29 16:23:32.957213 containerd[1498]: time="2025-01-29T16:23:32.957139855Z" level=info msg="StartContainer for \"a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277\" returns successfully" Jan 29 16:23:32.977844 containerd[1498]: time="2025-01-29T16:23:32.977767561Z" level=info msg="StartContainer for \"1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec\" returns successfully" Jan 29 16:23:34.380434 kubelet[2645]: I0129 16:23:34.380397 2645 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:35.651228 kubelet[2645]: E0129 16:23:35.651157 2645 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-0-e-139a7b6c18\" not found" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:35.720515 kubelet[2645]: I0129 16:23:35.720463 2645 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:36.237273 kubelet[2645]: I0129 16:23:36.236982 2645 apiserver.go:52] "Watching apiserver" Jan 29 16:23:36.256867 kubelet[2645]: I0129 16:23:36.256798 2645 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:23:38.180920 systemd[1]: Reload requested from client PID 2921 ('systemctl') (unit session-7.scope)... Jan 29 16:23:38.180935 systemd[1]: Reloading... Jan 29 16:23:38.320856 zram_generator::config[2978]: No configuration found. Jan 29 16:23:38.424590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 16:23:38.550169 systemd[1]: Reloading finished in 368 ms. Jan 29 16:23:38.575206 kubelet[2645]: E0129 16:23:38.574858 2645 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4230-0-0-e-139a7b6c18.181f366a44d5efa7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-0-e-139a7b6c18,UID:ci-4230-0-0-e-139a7b6c18,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-e-139a7b6c18,},FirstTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,LastTimestamp:2025-01-29 16:23:31.237605287 +0000 UTC m=+0.999622000,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-e-139a7b6c18,}" Jan 29 16:23:38.575382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 16:23:38.590224 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 16:23:38.590954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:38.591182 systemd[1]: kubelet.service: Consumed 1.566s CPU time, 115.6M memory peak. Jan 29 16:23:38.605269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 16:23:38.757154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 16:23:38.781279 (kubelet)[3011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 16:23:38.856442 kubelet[3011]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:38.856442 kubelet[3011]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 16:23:38.856442 kubelet[3011]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 16:23:38.858417 kubelet[3011]: I0129 16:23:38.856235 3011 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 16:23:38.869848 kubelet[3011]: I0129 16:23:38.869012 3011 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 16:23:38.869848 kubelet[3011]: I0129 16:23:38.869052 3011 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 16:23:38.869848 kubelet[3011]: I0129 16:23:38.869382 3011 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 16:23:38.874005 kubelet[3011]: I0129 16:23:38.873966 3011 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 16:23:38.876839 kubelet[3011]: I0129 16:23:38.876441 3011 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 16:23:38.885950 kubelet[3011]: I0129 16:23:38.885915 3011 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 16:23:38.886166 kubelet[3011]: I0129 16:23:38.886128 3011 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 16:23:38.886441 kubelet[3011]: I0129 16:23:38.886165 3011 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-0-e-139a7b6c18","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 16:23:38.886441 kubelet[3011]: I0129 16:23:38.886412 3011 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 16:23:38.886441 kubelet[3011]: I0129 16:23:38.886423 3011 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 16:23:38.886613 kubelet[3011]: I0129 16:23:38.886461 3011 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:38.886613 kubelet[3011]: I0129 16:23:38.886571 3011 kubelet.go:400] "Attempting to sync node with API server" Jan 29 16:23:38.886613 kubelet[3011]: I0129 16:23:38.886584 3011 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 16:23:38.886613 kubelet[3011]: I0129 16:23:38.886613 3011 kubelet.go:312] "Adding apiserver pod source" Jan 29 16:23:38.889068 kubelet[3011]: I0129 16:23:38.886632 3011 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 16:23:38.889444 kubelet[3011]: I0129 16:23:38.889398 3011 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 16:23:38.889645 kubelet[3011]: I0129 16:23:38.889626 3011 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 16:23:38.890222 kubelet[3011]: I0129 16:23:38.890193 3011 server.go:1264] "Started kubelet" Jan 29 16:23:38.897252 kubelet[3011]: I0129 16:23:38.897210 3011 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 16:23:38.899615 kubelet[3011]: E0129 16:23:38.899585 3011 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 16:23:38.906950 kubelet[3011]: I0129 16:23:38.906888 3011 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 16:23:38.910000 kubelet[3011]: I0129 16:23:38.909939 3011 server.go:455] "Adding debug handlers to kubelet server" Jan 29 16:23:38.912597 kubelet[3011]: I0129 16:23:38.912516 3011 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 16:23:38.912777 kubelet[3011]: I0129 16:23:38.912756 3011 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 16:23:38.914391 kubelet[3011]: I0129 16:23:38.914096 3011 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 16:23:38.915906 kubelet[3011]: I0129 16:23:38.915791 3011 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 16:23:38.915906 kubelet[3011]: I0129 16:23:38.915870 3011 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 16:23:38.916376 kubelet[3011]: I0129 16:23:38.915986 3011 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 16:23:38.916376 kubelet[3011]: E0129 16:23:38.916032 3011 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 16:23:38.918929 kubelet[3011]: I0129 16:23:38.918591 3011 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 16:23:38.919926 kubelet[3011]: I0129 16:23:38.919111 3011 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 16:23:38.923001 kubelet[3011]: I0129 16:23:38.922961 3011 reconciler.go:26] "Reconciler: start to sync state" Jan 29 16:23:38.935220 kubelet[3011]: I0129 16:23:38.935171 3011 factory.go:221] Registration of the systemd container factory successfully Jan 29 16:23:38.935555 kubelet[3011]: I0129 16:23:38.935532 3011 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 16:23:38.940165 kubelet[3011]: I0129 16:23:38.940130 3011 factory.go:221] Registration of the containerd container factory successfully Jan 29 16:23:39.009000 kubelet[3011]: I0129 16:23:39.008918 3011 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 16:23:39.009000 kubelet[3011]: I0129 16:23:39.009002 3011 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 16:23:39.009188 kubelet[3011]: I0129 16:23:39.009038 3011 state_mem.go:36] "Initialized new in-memory state store" Jan 29 16:23:39.009296 kubelet[3011]: I0129 16:23:39.009269 3011 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 16:23:39.009339 kubelet[3011]: I0129 16:23:39.009293 3011 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 16:23:39.009339 kubelet[3011]: I0129 16:23:39.009330 3011 policy_none.go:49] "None policy: Start" Jan 29 16:23:39.010155 kubelet[3011]: I0129 16:23:39.010132 3011 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 16:23:39.010263 kubelet[3011]: I0129 16:23:39.010168 3011 state_mem.go:35] "Initializing new in-memory state store" Jan 29 16:23:39.010396 kubelet[3011]: I0129 16:23:39.010374 3011 state_mem.go:75] "Updated machine memory state" Jan 29 16:23:39.016307 kubelet[3011]: E0129 16:23:39.016269 3011 kubelet.go:2361] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Jan 29 16:23:39.018175 kubelet[3011]: I0129 16:23:39.017473 3011 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 16:23:39.018175 kubelet[3011]: I0129 16:23:39.017752 3011 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 16:23:39.018175 kubelet[3011]: I0129 16:23:39.017946 3011 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 16:23:39.031046 kubelet[3011]: I0129 16:23:39.030091 3011 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.047867 kubelet[3011]: I0129 16:23:39.047831 3011 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.048157 kubelet[3011]: I0129 16:23:39.048142 3011 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.219016 kubelet[3011]: I0129 16:23:39.217621 3011 topology_manager.go:215] "Topology Admit Handler" podUID="65a5389adc32590f2f0bf8e7a7caad99" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.219016 kubelet[3011]: I0129 16:23:39.217859 3011 topology_manager.go:215] "Topology Admit Handler" podUID="6db7b489acb9c0d50ae13e6fea9b04cb" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.219016 kubelet[3011]: I0129 16:23:39.217916 3011 topology_manager.go:215] "Topology Admit Handler" podUID="a6a1b314d60f03ba7f43df755cbf4185" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228033 kubelet[3011]: I0129 16:23:39.227978 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228375 kubelet[3011]: I0129 16:23:39.228331 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228535 kubelet[3011]: I0129 16:23:39.228490 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228636 kubelet[3011]: I0129 16:23:39.228625 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6a1b314d60f03ba7f43df755cbf4185-kubeconfig\") pod \"kube-scheduler-ci-4230-0-0-e-139a7b6c18\" (UID: \"a6a1b314d60f03ba7f43df755cbf4185\") " pod="kube-system/kube-scheduler-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228769 kubelet[3011]: I0129 16:23:39.228728 3011 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-k8s-certs\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.228871 kubelet[3011]: I0129 16:23:39.228859 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.229017 kubelet[3011]: I0129 16:23:39.228973 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-ca-certs\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.229129 kubelet[3011]: I0129 16:23:39.229117 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6db7b489acb9c0d50ae13e6fea9b04cb-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-0-e-139a7b6c18\" (UID: \"6db7b489acb9c0d50ae13e6fea9b04cb\") " pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.229281 kubelet[3011]: I0129 16:23:39.229221 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/65a5389adc32590f2f0bf8e7a7caad99-ca-certs\") pod \"kube-apiserver-ci-4230-0-0-e-139a7b6c18\" (UID: \"65a5389adc32590f2f0bf8e7a7caad99\") " pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" Jan 29 16:23:39.888080 kubelet[3011]: I0129 16:23:39.887937 3011 apiserver.go:52] "Watching apiserver" Jan 29 16:23:39.920097 kubelet[3011]: I0129 16:23:39.920034 3011 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 16:23:40.018709 kubelet[3011]: I0129 16:23:40.018529 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-0-e-139a7b6c18" podStartSLOduration=1.018513249 podStartE2EDuration="1.018513249s" podCreationTimestamp="2025-01-29 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:40.016748505 +0000 UTC m=+1.226173536" watchObservedRunningTime="2025-01-29 16:23:40.018513249 +0000 UTC m=+1.227938320" Jan 29 16:23:40.044276 kubelet[3011]: I0129 16:23:40.044124 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-0-e-139a7b6c18" podStartSLOduration=1.044105756 podStartE2EDuration="1.044105756s" podCreationTimestamp="2025-01-29 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:40.031238661 +0000 UTC m=+1.240663692" watchObservedRunningTime="2025-01-29 16:23:40.044105756 +0000 UTC m=+1.253530907" Jan 29 16:23:40.044276 kubelet[3011]: I0129 16:23:40.044200 3011 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-0-e-139a7b6c18" podStartSLOduration=1.044195197 podStartE2EDuration="1.044195197s" podCreationTimestamp="2025-01-29 16:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:40.043944194 +0000 UTC m=+1.253369265" watchObservedRunningTime="2025-01-29 16:23:40.044195197 +0000 UTC m=+1.253620268" Jan 29 16:23:44.266879 sudo[2021]: pam_unix(sudo:session): session closed for user root Jan 29 16:23:44.425427 sshd[2020]: Connection closed by 139.178.68.195 port 41402 Jan 29 16:23:44.428167 sshd-session[2018]: pam_unix(sshd:session): session closed for user core Jan 29 16:23:44.433097 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. Jan 29 16:23:44.433424 systemd[1]: sshd@32-167.235.198.80:22-139.178.68.195:41402.service: Deactivated successfully. Jan 29 16:23:44.437107 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 16:23:44.437649 systemd[1]: session-7.scope: Consumed 7.944s CPU time, 259.9M memory peak. Jan 29 16:23:44.441003 systemd-logind[1475]: Removed session 7. Jan 29 16:23:54.853856 kubelet[3011]: I0129 16:23:54.853097 3011 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 16:23:54.854296 containerd[1498]: time="2025-01-29T16:23:54.853491361Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 16:23:54.857152 kubelet[3011]: I0129 16:23:54.854778 3011 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 16:23:55.467535 kubelet[3011]: I0129 16:23:55.467472 3011 topology_manager.go:215] "Topology Admit Handler" podUID="c962432d-e241-41d7-a89f-3f779c5eea96" podNamespace="kube-system" podName="kube-proxy-5fnrl" Jan 29 16:23:55.480044 systemd[1]: Created slice kubepods-besteffort-podc962432d_e241_41d7_a89f_3f779c5eea96.slice - libcontainer container kubepods-besteffort-podc962432d_e241_41d7_a89f_3f779c5eea96.slice. 
Jan 29 16:23:55.545351 kubelet[3011]: I0129 16:23:55.544306 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c962432d-e241-41d7-a89f-3f779c5eea96-kube-proxy\") pod \"kube-proxy-5fnrl\" (UID: \"c962432d-e241-41d7-a89f-3f779c5eea96\") " pod="kube-system/kube-proxy-5fnrl" Jan 29 16:23:55.545351 kubelet[3011]: I0129 16:23:55.544466 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c962432d-e241-41d7-a89f-3f779c5eea96-lib-modules\") pod \"kube-proxy-5fnrl\" (UID: \"c962432d-e241-41d7-a89f-3f779c5eea96\") " pod="kube-system/kube-proxy-5fnrl" Jan 29 16:23:55.545351 kubelet[3011]: I0129 16:23:55.544520 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpq6\" (UniqueName: \"kubernetes.io/projected/c962432d-e241-41d7-a89f-3f779c5eea96-kube-api-access-7mpq6\") pod \"kube-proxy-5fnrl\" (UID: \"c962432d-e241-41d7-a89f-3f779c5eea96\") " pod="kube-system/kube-proxy-5fnrl" Jan 29 16:23:55.545351 kubelet[3011]: I0129 16:23:55.544686 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c962432d-e241-41d7-a89f-3f779c5eea96-xtables-lock\") pod \"kube-proxy-5fnrl\" (UID: \"c962432d-e241-41d7-a89f-3f779c5eea96\") " pod="kube-system/kube-proxy-5fnrl" Jan 29 16:23:55.656141 kubelet[3011]: E0129 16:23:55.655825 3011 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 16:23:55.656141 kubelet[3011]: E0129 16:23:55.655861 3011 projected.go:200] Error preparing data for projected volume kube-api-access-7mpq6 for pod kube-system/kube-proxy-5fnrl: configmap "kube-root-ca.crt" not found Jan 29 16:23:55.656141 kubelet[3011]: E0129 16:23:55.655936 3011 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c962432d-e241-41d7-a89f-3f779c5eea96-kube-api-access-7mpq6 podName:c962432d-e241-41d7-a89f-3f779c5eea96 nodeName:}" failed. No retries permitted until 2025-01-29 16:23:56.15591079 +0000 UTC m=+17.365335861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7mpq6" (UniqueName: "kubernetes.io/projected/c962432d-e241-41d7-a89f-3f779c5eea96-kube-api-access-7mpq6") pod "kube-proxy-5fnrl" (UID: "c962432d-e241-41d7-a89f-3f779c5eea96") : configmap "kube-root-ca.crt" not found Jan 29 16:23:55.853793 kubelet[3011]: I0129 16:23:55.853642 3011 topology_manager.go:215] "Topology Admit Handler" podUID="6b039cb0-80b5-44ff-80ac-d0f97ff75f32" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-vt7fd" Jan 29 16:23:55.864884 systemd[1]: Created slice kubepods-besteffort-pod6b039cb0_80b5_44ff_80ac_d0f97ff75f32.slice - libcontainer container kubepods-besteffort-pod6b039cb0_80b5_44ff_80ac_d0f97ff75f32.slice. 
Jan 29 16:23:55.947319 kubelet[3011]: I0129 16:23:55.947270 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b039cb0-80b5-44ff-80ac-d0f97ff75f32-var-lib-calico\") pod \"tigera-operator-7bc55997bb-vt7fd\" (UID: \"6b039cb0-80b5-44ff-80ac-d0f97ff75f32\") " pod="tigera-operator/tigera-operator-7bc55997bb-vt7fd" Jan 29 16:23:55.947942 kubelet[3011]: I0129 16:23:55.947805 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q47jq\" (UniqueName: \"kubernetes.io/projected/6b039cb0-80b5-44ff-80ac-d0f97ff75f32-kube-api-access-q47jq\") pod \"tigera-operator-7bc55997bb-vt7fd\" (UID: \"6b039cb0-80b5-44ff-80ac-d0f97ff75f32\") " pod="tigera-operator/tigera-operator-7bc55997bb-vt7fd" Jan 29 16:23:56.169696 containerd[1498]: time="2025-01-29T16:23:56.168875162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vt7fd,Uid:6b039cb0-80b5-44ff-80ac-d0f97ff75f32,Namespace:tigera-operator,Attempt:0,}" Jan 29 16:23:56.200077 containerd[1498]: time="2025-01-29T16:23:56.199967543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:56.200077 containerd[1498]: time="2025-01-29T16:23:56.200034064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:56.200381 containerd[1498]: time="2025-01-29T16:23:56.200052624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:56.200589 containerd[1498]: time="2025-01-29T16:23:56.200516875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:56.231076 systemd[1]: Started cri-containerd-7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6.scope - libcontainer container 7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6. Jan 29 16:23:56.268743 containerd[1498]: time="2025-01-29T16:23:56.268624530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vt7fd,Uid:6b039cb0-80b5-44ff-80ac-d0f97ff75f32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6\"" Jan 29 16:23:56.272002 containerd[1498]: time="2025-01-29T16:23:56.271583437Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 16:23:56.393152 containerd[1498]: time="2025-01-29T16:23:56.392605365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fnrl,Uid:c962432d-e241-41d7-a89f-3f779c5eea96,Namespace:kube-system,Attempt:0,}" Jan 29 16:23:56.427140 containerd[1498]: time="2025-01-29T16:23:56.426600731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:23:56.429179 containerd[1498]: time="2025-01-29T16:23:56.428194847Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:23:56.429179 containerd[1498]: time="2025-01-29T16:23:56.428251768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:56.430004 containerd[1498]: time="2025-01-29T16:23:56.429866164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:23:56.449066 systemd[1]: Started cri-containerd-687ce39e5c4af80e1b6700a3f04348ea8c6b4ee32c5cc7421882aa41622a1d93.scope - libcontainer container 687ce39e5c4af80e1b6700a3f04348ea8c6b4ee32c5cc7421882aa41622a1d93. Jan 29 16:23:56.479316 containerd[1498]: time="2025-01-29T16:23:56.479165516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5fnrl,Uid:c962432d-e241-41d7-a89f-3f779c5eea96,Namespace:kube-system,Attempt:0,} returns sandbox id \"687ce39e5c4af80e1b6700a3f04348ea8c6b4ee32c5cc7421882aa41622a1d93\"" Jan 29 16:23:56.485861 containerd[1498]: time="2025-01-29T16:23:56.485667582Z" level=info msg="CreateContainer within sandbox \"687ce39e5c4af80e1b6700a3f04348ea8c6b4ee32c5cc7421882aa41622a1d93\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 16:23:56.504425 containerd[1498]: time="2025-01-29T16:23:56.504354363Z" level=info msg="CreateContainer within sandbox \"687ce39e5c4af80e1b6700a3f04348ea8c6b4ee32c5cc7421882aa41622a1d93\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fdf3bb0bb12d0142836c95c4804cd5d490921a93e7f6cd763b644897fb7f7044\"" Jan 29 16:23:56.506625 containerd[1498]: time="2025-01-29T16:23:56.505606952Z" level=info msg="StartContainer for \"fdf3bb0bb12d0142836c95c4804cd5d490921a93e7f6cd763b644897fb7f7044\"" Jan 29 16:23:56.537051 systemd[1]: Started cri-containerd-fdf3bb0bb12d0142836c95c4804cd5d490921a93e7f6cd763b644897fb7f7044.scope - libcontainer container fdf3bb0bb12d0142836c95c4804cd5d490921a93e7f6cd763b644897fb7f7044. Jan 29 16:23:56.576101 containerd[1498]: time="2025-01-29T16:23:56.575949657Z" level=info msg="StartContainer for \"fdf3bb0bb12d0142836c95c4804cd5d490921a93e7f6cd763b644897fb7f7044\" returns successfully" Jan 29 16:23:58.133132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3908857934.mount: Deactivated successfully. Jan 29 16:23:58.421387 systemd[1]: Started sshd@36-167.235.198.80:22-103.142.199.159:35562.service - OpenSSH per-connection server daemon (103.142.199.159:35562). 
Jan 29 16:23:58.601555 containerd[1498]: time="2025-01-29T16:23:58.601463475Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:58.603649 containerd[1498]: time="2025-01-29T16:23:58.603557844Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160" Jan 29 16:23:58.607309 containerd[1498]: time="2025-01-29T16:23:58.607178968Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:58.611185 containerd[1498]: time="2025-01-29T16:23:58.610387123Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:23:58.611901 containerd[1498]: time="2025-01-29T16:23:58.611861038Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.34023664s" Jan 29 16:23:58.611901 containerd[1498]: time="2025-01-29T16:23:58.611902279Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 29 16:23:58.615514 containerd[1498]: time="2025-01-29T16:23:58.615464162Z" level=info msg="CreateContainer within sandbox \"7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 16:23:58.640590 containerd[1498]: time="2025-01-29T16:23:58.640368264Z" level=info msg="CreateContainer within sandbox \"7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053\"" Jan 29 16:23:58.644646 containerd[1498]: time="2025-01-29T16:23:58.642088104Z" level=info msg="StartContainer for \"54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053\"" Jan 29 16:23:58.682214 systemd[1]: Started cri-containerd-54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053.scope - libcontainer container 54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053. 
Jan 29 16:23:58.717656 containerd[1498]: time="2025-01-29T16:23:58.717598589Z" level=info msg="StartContainer for \"54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053\" returns successfully" Jan 29 16:23:58.940309 kubelet[3011]: I0129 16:23:58.939523 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5fnrl" podStartSLOduration=3.939499176 podStartE2EDuration="3.939499176s" podCreationTimestamp="2025-01-29 16:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:23:57.046233636 +0000 UTC m=+18.255658707" watchObservedRunningTime="2025-01-29 16:23:58.939499176 +0000 UTC m=+20.148924287" Jan 29 16:23:59.336758 sshd[3335]: Received disconnect from 103.142.199.159 port 35562:11: Bye Bye [preauth] Jan 29 16:23:59.336758 sshd[3335]: Disconnected from authenticating user root 103.142.199.159 port 35562 [preauth] Jan 29 16:23:59.339252 systemd[1]: sshd@36-167.235.198.80:22-103.142.199.159:35562.service: Deactivated successfully. Jan 29 16:24:03.175570 kubelet[3011]: I0129 16:24:03.175406 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-vt7fd" podStartSLOduration=5.8326201730000005 podStartE2EDuration="8.175382671s" podCreationTimestamp="2025-01-29 16:23:55 +0000 UTC" firstStartedPulling="2025-01-29 16:23:56.270864941 +0000 UTC m=+17.480290012" lastFinishedPulling="2025-01-29 16:23:58.613627439 +0000 UTC m=+19.823052510" observedRunningTime="2025-01-29 16:23:59.049522766 +0000 UTC m=+20.258947877" watchObservedRunningTime="2025-01-29 16:24:03.175382671 +0000 UTC m=+24.384807742" Jan 29 16:24:03.176214 kubelet[3011]: I0129 16:24:03.175645 3011 topology_manager.go:215] "Topology Admit Handler" podUID="eb63471a-0f54-4c33-9cbb-f875a6517b26" podNamespace="calico-system" podName="calico-typha-7f7988d9dc-cjt4t" Jan 29 16:24:03.187270 systemd[1]: Created slice kubepods-besteffort-podeb63471a_0f54_4c33_9cbb_f875a6517b26.slice - libcontainer container kubepods-besteffort-podeb63471a_0f54_4c33_9cbb_f875a6517b26.slice. 
Jan 29 16:24:03.198944 kubelet[3011]: I0129 16:24:03.198300 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8j56\" (UniqueName: \"kubernetes.io/projected/eb63471a-0f54-4c33-9cbb-f875a6517b26-kube-api-access-n8j56\") pod \"calico-typha-7f7988d9dc-cjt4t\" (UID: \"eb63471a-0f54-4c33-9cbb-f875a6517b26\") " pod="calico-system/calico-typha-7f7988d9dc-cjt4t" Jan 29 16:24:03.198944 kubelet[3011]: I0129 16:24:03.198351 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb63471a-0f54-4c33-9cbb-f875a6517b26-tigera-ca-bundle\") pod \"calico-typha-7f7988d9dc-cjt4t\" (UID: \"eb63471a-0f54-4c33-9cbb-f875a6517b26\") " pod="calico-system/calico-typha-7f7988d9dc-cjt4t" Jan 29 16:24:03.198944 kubelet[3011]: I0129 16:24:03.198374 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb63471a-0f54-4c33-9cbb-f875a6517b26-typha-certs\") pod \"calico-typha-7f7988d9dc-cjt4t\" (UID: \"eb63471a-0f54-4c33-9cbb-f875a6517b26\") " pod="calico-system/calico-typha-7f7988d9dc-cjt4t" Jan 29 16:24:03.299102 kubelet[3011]: I0129 16:24:03.298528 3011 topology_manager.go:215] "Topology Admit Handler" podUID="16e2b5aa-a6df-4d45-846a-1882ef6556bc" podNamespace="calico-system" podName="calico-node-6mswg" Jan 29 16:24:03.320374 systemd[1]: Created slice kubepods-besteffort-pod16e2b5aa_a6df_4d45_846a_1882ef6556bc.slice - libcontainer container kubepods-besteffort-pod16e2b5aa_a6df_4d45_846a_1882ef6556bc.slice. Jan 29 16:24:03.399127 kubelet[3011]: I0129 16:24:03.399074 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-policysync\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399313 kubelet[3011]: I0129 16:24:03.399151 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16e2b5aa-a6df-4d45-846a-1882ef6556bc-tigera-ca-bundle\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399313 kubelet[3011]: I0129 16:24:03.399191 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-var-run-calico\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399313 kubelet[3011]: I0129 16:24:03.399209 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/16e2b5aa-a6df-4d45-846a-1882ef6556bc-node-certs\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399688 kubelet[3011]: I0129 16:24:03.399484 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-cni-net-dir\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " 
pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399688 kubelet[3011]: I0129 16:24:03.399540 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-xtables-lock\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399688 kubelet[3011]: I0129 16:24:03.399561 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq5bt\" (UniqueName: \"kubernetes.io/projected/16e2b5aa-a6df-4d45-846a-1882ef6556bc-kube-api-access-vq5bt\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399688 kubelet[3011]: I0129 16:24:03.399602 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-cni-bin-dir\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.399688 kubelet[3011]: I0129 16:24:03.399619 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-cni-log-dir\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.400689 kubelet[3011]: I0129 16:24:03.399668 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-lib-modules\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.400689 kubelet[3011]: I0129 16:24:03.399688 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-var-lib-calico\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.400689 kubelet[3011]: I0129 16:24:03.399705 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/16e2b5aa-a6df-4d45-846a-1882ef6556bc-flexvol-driver-host\") pod \"calico-node-6mswg\" (UID: \"16e2b5aa-a6df-4d45-846a-1882ef6556bc\") " pod="calico-system/calico-node-6mswg" Jan 29 16:24:03.420485 kubelet[3011]: I0129 16:24:03.420413 3011 topology_manager.go:215] "Topology Admit Handler" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" podNamespace="calico-system" podName="csi-node-driver-z22bk" Jan 29 16:24:03.421069 kubelet[3011]: E0129 16:24:03.421026 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:03.493994 containerd[1498]: time="2025-01-29T16:24:03.493881189Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-7f7988d9dc-cjt4t,Uid:eb63471a-0f54-4c33-9cbb-f875a6517b26,Namespace:calico-system,Attempt:0,}" Jan 29 16:24:03.503927 kubelet[3011]: I0129 16:24:03.502614 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bec95826-331e-47ab-a0cc-d4c3b56446fd-socket-dir\") pod \"csi-node-driver-z22bk\" (UID: \"bec95826-331e-47ab-a0cc-d4c3b56446fd\") " pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:03.507846 kubelet[3011]: I0129 16:24:03.507652 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd8ww\" (UniqueName: \"kubernetes.io/projected/bec95826-331e-47ab-a0cc-d4c3b56446fd-kube-api-access-pd8ww\") pod \"csi-node-driver-z22bk\" (UID: \"bec95826-331e-47ab-a0cc-d4c3b56446fd\") " pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:03.511070 kubelet[3011]: E0129 16:24:03.509195 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.511070 kubelet[3011]: W0129 16:24:03.511021 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.511655 kubelet[3011]: E0129 16:24:03.511497 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.518009 kubelet[3011]: E0129 16:24:03.517231 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.518009 kubelet[3011]: W0129 16:24:03.517267 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.518009 kubelet[3011]: E0129 16:24:03.517301 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.518009 kubelet[3011]: I0129 16:24:03.517340 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bec95826-331e-47ab-a0cc-d4c3b56446fd-kubelet-dir\") pod \"csi-node-driver-z22bk\" (UID: \"bec95826-331e-47ab-a0cc-d4c3b56446fd\") " pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:03.523948 kubelet[3011]: E0129 16:24:03.523913 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.524334 kubelet[3011]: W0129 16:24:03.524123 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.524334 kubelet[3011]: E0129 16:24:03.524203 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.524854 kubelet[3011]: E0129 16:24:03.524831 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.525059 kubelet[3011]: W0129 16:24:03.524911 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.526001 kubelet[3011]: E0129 16:24:03.525869 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.527049 kubelet[3011]: E0129 16:24:03.526787 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.527049 kubelet[3011]: W0129 16:24:03.526808 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.527604 kubelet[3011]: E0129 16:24:03.527369 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.528631 kubelet[3011]: E0129 16:24:03.528444 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.528631 kubelet[3011]: W0129 16:24:03.528507 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.530195 kubelet[3011]: E0129 16:24:03.528993 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.530195 kubelet[3011]: E0129 16:24:03.529240 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.530195 kubelet[3011]: W0129 16:24:03.529254 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.530195 kubelet[3011]: E0129 16:24:03.529353 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.530195 kubelet[3011]: I0129 16:24:03.529393 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bec95826-331e-47ab-a0cc-d4c3b56446fd-varrun\") pod \"csi-node-driver-z22bk\" (UID: \"bec95826-331e-47ab-a0cc-d4c3b56446fd\") " pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:03.530195 kubelet[3011]: E0129 16:24:03.530058 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.530195 kubelet[3011]: W0129 16:24:03.530084 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.530195 kubelet[3011]: E0129 16:24:03.530162 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.531138 kubelet[3011]: E0129 16:24:03.530856 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.531138 kubelet[3011]: W0129 16:24:03.530873 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.531138 kubelet[3011]: E0129 16:24:03.530891 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.531670 kubelet[3011]: E0129 16:24:03.531651 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.531823 kubelet[3011]: W0129 16:24:03.531767 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.531823 kubelet[3011]: E0129 16:24:03.531798 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.532371 kubelet[3011]: E0129 16:24:03.532270 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.532371 kubelet[3011]: W0129 16:24:03.532284 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.532371 kubelet[3011]: E0129 16:24:03.532311 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.532371 kubelet[3011]: I0129 16:24:03.532343 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bec95826-331e-47ab-a0cc-d4c3b56446fd-registration-dir\") pod \"csi-node-driver-z22bk\" (UID: \"bec95826-331e-47ab-a0cc-d4c3b56446fd\") " pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:03.533028 kubelet[3011]: E0129 16:24:03.532803 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.533028 kubelet[3011]: W0129 16:24:03.532886 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.533028 kubelet[3011]: E0129 16:24:03.532899 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.533918 kubelet[3011]: E0129 16:24:03.533747 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.533918 kubelet[3011]: W0129 16:24:03.533768 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.533918 kubelet[3011]: E0129 16:24:03.533798 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.534382 kubelet[3011]: E0129 16:24:03.534204 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.534382 kubelet[3011]: W0129 16:24:03.534227 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.534382 kubelet[3011]: E0129 16:24:03.534240 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.534837 kubelet[3011]: E0129 16:24:03.534690 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.534837 kubelet[3011]: W0129 16:24:03.534706 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.534837 kubelet[3011]: E0129 16:24:03.534726 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.535283 kubelet[3011]: E0129 16:24:03.535160 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.535283 kubelet[3011]: W0129 16:24:03.535175 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.535283 kubelet[3011]: E0129 16:24:03.535190 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.535790 kubelet[3011]: E0129 16:24:03.535695 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.535790 kubelet[3011]: W0129 16:24:03.535711 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.535790 kubelet[3011]: E0129 16:24:03.535725 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.562088 containerd[1498]: time="2025-01-29T16:24:03.560449829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:03.562088 containerd[1498]: time="2025-01-29T16:24:03.560560392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:03.562088 containerd[1498]: time="2025-01-29T16:24:03.560578872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:03.562088 containerd[1498]: time="2025-01-29T16:24:03.560697755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:03.578342 kubelet[3011]: E0129 16:24:03.578092 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.578342 kubelet[3011]: W0129 16:24:03.578137 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.578342 kubelet[3011]: E0129 16:24:03.578166 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.614093 systemd[1]: Started cri-containerd-5f0a7f460384ce875b141dcd51d90dd56bdd5934cada8e6fa80b47e054e8e3d9.scope - libcontainer container 5f0a7f460384ce875b141dcd51d90dd56bdd5934cada8e6fa80b47e054e8e3d9. 
Jan 29 16:24:03.627850 containerd[1498]: time="2025-01-29T16:24:03.627752288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mswg,Uid:16e2b5aa-a6df-4d45-846a-1882ef6556bc,Namespace:calico-system,Attempt:0,}" Jan 29 16:24:03.634886 kubelet[3011]: E0129 16:24:03.634740 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.635416 kubelet[3011]: W0129 16:24:03.634942 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.635416 kubelet[3011]: E0129 16:24:03.634975 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.635845 kubelet[3011]: E0129 16:24:03.635642 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.636142 kubelet[3011]: W0129 16:24:03.635806 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.636142 kubelet[3011]: E0129 16:24:03.636010 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.637513 kubelet[3011]: E0129 16:24:03.637381 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.637513 kubelet[3011]: W0129 16:24:03.637405 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.637513 kubelet[3011]: E0129 16:24:03.637442 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.638290 kubelet[3011]: E0129 16:24:03.638138 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.638290 kubelet[3011]: W0129 16:24:03.638158 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.638697 kubelet[3011]: E0129 16:24:03.638190 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.639279 kubelet[3011]: E0129 16:24:03.638961 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.639279 kubelet[3011]: W0129 16:24:03.638977 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.639279 kubelet[3011]: E0129 16:24:03.639046 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.641530 kubelet[3011]: E0129 16:24:03.641128 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.641530 kubelet[3011]: W0129 16:24:03.641173 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.642442 kubelet[3011]: E0129 16:24:03.641957 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.642935 kubelet[3011]: E0129 16:24:03.642907 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.643310 kubelet[3011]: W0129 16:24:03.643099 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.643967 kubelet[3011]: E0129 16:24:03.643862 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.645308 kubelet[3011]: E0129 16:24:03.645283 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.645578 kubelet[3011]: W0129 16:24:03.645539 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.646077 kubelet[3011]: E0129 16:24:03.645893 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.646908 kubelet[3011]: E0129 16:24:03.646600 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.646908 kubelet[3011]: W0129 16:24:03.646618 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.647375 kubelet[3011]: E0129 16:24:03.647269 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.648098 kubelet[3011]: E0129 16:24:03.647787 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.648098 kubelet[3011]: W0129 16:24:03.647803 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.650634 kubelet[3011]: E0129 16:24:03.648653 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.652069 kubelet[3011]: E0129 16:24:03.651378 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.652069 kubelet[3011]: W0129 16:24:03.651405 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.652069 kubelet[3011]: E0129 16:24:03.651535 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.652396 kubelet[3011]: E0129 16:24:03.652374 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.652761 kubelet[3011]: W0129 16:24:03.652589 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.652953 kubelet[3011]: E0129 16:24:03.652927 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.653495 kubelet[3011]: E0129 16:24:03.653288 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.653495 kubelet[3011]: W0129 16:24:03.653434 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.654376 kubelet[3011]: E0129 16:24:03.654239 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.654376 kubelet[3011]: E0129 16:24:03.654303 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.654376 kubelet[3011]: W0129 16:24:03.654314 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.654587 kubelet[3011]: E0129 16:24:03.654544 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.655303 kubelet[3011]: E0129 16:24:03.655283 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.655696 kubelet[3011]: W0129 16:24:03.655551 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.655696 kubelet[3011]: E0129 16:24:03.655628 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.656883 kubelet[3011]: E0129 16:24:03.656654 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.656883 kubelet[3011]: W0129 16:24:03.656779 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.657287 kubelet[3011]: E0129 16:24:03.657028 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.657958 kubelet[3011]: E0129 16:24:03.657799 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.657958 kubelet[3011]: W0129 16:24:03.657888 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.658337 kubelet[3011]: E0129 16:24:03.658230 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.660598 kubelet[3011]: E0129 16:24:03.660336 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.660598 kubelet[3011]: W0129 16:24:03.660368 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.660598 kubelet[3011]: E0129 16:24:03.660552 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.664041 kubelet[3011]: E0129 16:24:03.663913 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.664429 kubelet[3011]: W0129 16:24:03.664383 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.664840 kubelet[3011]: E0129 16:24:03.664755 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.665492 kubelet[3011]: E0129 16:24:03.665427 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.665492 kubelet[3011]: W0129 16:24:03.665443 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.666241 kubelet[3011]: E0129 16:24:03.666180 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.667240 kubelet[3011]: E0129 16:24:03.667131 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.667826 kubelet[3011]: W0129 16:24:03.667547 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.668029 kubelet[3011]: E0129 16:24:03.668010 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.668973 kubelet[3011]: E0129 16:24:03.668744 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.668973 kubelet[3011]: W0129 16:24:03.668940 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.669578 kubelet[3011]: E0129 16:24:03.669418 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.671123 kubelet[3011]: E0129 16:24:03.670449 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.671123 kubelet[3011]: W0129 16:24:03.670534 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.671960 kubelet[3011]: E0129 16:24:03.671862 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.673686 kubelet[3011]: E0129 16:24:03.672479 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.673686 kubelet[3011]: W0129 16:24:03.673629 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.674159 kubelet[3011]: E0129 16:24:03.673877 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:03.674618 kubelet[3011]: E0129 16:24:03.674597 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.674944 kubelet[3011]: W0129 16:24:03.674703 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.674944 kubelet[3011]: E0129 16:24:03.674728 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.694080 containerd[1498]: time="2025-01-29T16:24:03.693220460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:03.694257 containerd[1498]: time="2025-01-29T16:24:03.694103682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:03.694257 containerd[1498]: time="2025-01-29T16:24:03.694125883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:03.696219 containerd[1498]: time="2025-01-29T16:24:03.696039611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:03.700972 kubelet[3011]: E0129 16:24:03.700867 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:03.700972 kubelet[3011]: W0129 16:24:03.700903 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:03.700972 kubelet[3011]: E0129 16:24:03.700926 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:03.716969 containerd[1498]: time="2025-01-29T16:24:03.716900778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7988d9dc-cjt4t,Uid:eb63471a-0f54-4c33-9cbb-f875a6517b26,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f0a7f460384ce875b141dcd51d90dd56bdd5934cada8e6fa80b47e054e8e3d9\"" Jan 29 16:24:03.722674 containerd[1498]: time="2025-01-29T16:24:03.722614282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 16:24:03.728117 systemd[1]: Started cri-containerd-11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd.scope - libcontainer container 11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd. 
Jan 29 16:24:03.773368 containerd[1498]: time="2025-01-29T16:24:03.773220959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mswg,Uid:16e2b5aa-a6df-4d45-846a-1882ef6556bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\"" Jan 29 16:24:04.917653 kubelet[3011]: E0129 16:24:04.917076 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:05.106065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282275961.mount: Deactivated successfully. Jan 29 16:24:06.209849 containerd[1498]: time="2025-01-29T16:24:06.209096085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:06.212640 containerd[1498]: time="2025-01-29T16:24:06.212539576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 29 16:24:06.214439 containerd[1498]: time="2025-01-29T16:24:06.214187299Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:06.220285 containerd[1498]: time="2025-01-29T16:24:06.220207017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:06.221889 containerd[1498]: time="2025-01-29T16:24:06.221254924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.498581401s" Jan 29 16:24:06.221889 containerd[1498]: time="2025-01-29T16:24:06.221304526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 29 16:24:06.225181 containerd[1498]: time="2025-01-29T16:24:06.224581851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 16:24:06.248199 containerd[1498]: time="2025-01-29T16:24:06.247994385Z" level=info msg="CreateContainer within sandbox \"5f0a7f460384ce875b141dcd51d90dd56bdd5934cada8e6fa80b47e054e8e3d9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 16:24:06.292441 containerd[1498]: time="2025-01-29T16:24:06.292166304Z" level=info msg="CreateContainer within sandbox \"5f0a7f460384ce875b141dcd51d90dd56bdd5934cada8e6fa80b47e054e8e3d9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7fc7917f9a9569aae1a7758a3ca16f410e68bf7ebc334bc63140060daead7fe9\"" Jan 29 16:24:06.293851 containerd[1498]: time="2025-01-29T16:24:06.293755905Z" level=info msg="StartContainer for \"7fc7917f9a9569aae1a7758a3ca16f410e68bf7ebc334bc63140060daead7fe9\"" Jan 29 16:24:06.339916 systemd[1]: Started cri-containerd-7fc7917f9a9569aae1a7758a3ca16f410e68bf7ebc334bc63140060daead7fe9.scope - 
libcontainer container 7fc7917f9a9569aae1a7758a3ca16f410e68bf7ebc334bc63140060daead7fe9. Jan 29 16:24:06.400717 containerd[1498]: time="2025-01-29T16:24:06.400658389Z" level=info msg="StartContainer for \"7fc7917f9a9569aae1a7758a3ca16f410e68bf7ebc334bc63140060daead7fe9\" returns successfully" Jan 29 16:24:06.918194 kubelet[3011]: E0129 16:24:06.918096 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:07.094562 kubelet[3011]: I0129 16:24:07.092757 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f7988d9dc-cjt4t" podStartSLOduration=1.591403652 podStartE2EDuration="4.092724564s" podCreationTimestamp="2025-01-29 16:24:03 +0000 UTC" firstStartedPulling="2025-01-29 16:24:03.721979426 +0000 UTC m=+24.931404497" lastFinishedPulling="2025-01-29 16:24:06.223300258 +0000 UTC m=+27.432725409" observedRunningTime="2025-01-29 16:24:07.091781059 +0000 UTC m=+28.301206130" watchObservedRunningTime="2025-01-29 16:24:07.092724564 +0000 UTC m=+28.302149635" Jan 29 16:24:07.115213 kubelet[3011]: E0129 16:24:07.115161 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.115213 kubelet[3011]: W0129 16:24:07.115199 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.115623 kubelet[3011]: E0129 16:24:07.115231 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.116778 kubelet[3011]: E0129 16:24:07.116743 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.116778 kubelet[3011]: W0129 16:24:07.116772 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.116981 kubelet[3011]: E0129 16:24:07.116808 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.117256 kubelet[3011]: E0129 16:24:07.117234 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.117256 kubelet[3011]: W0129 16:24:07.117252 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.117256 kubelet[3011]: E0129 16:24:07.117268 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.117856 kubelet[3011]: E0129 16:24:07.117833 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.117856 kubelet[3011]: W0129 16:24:07.117852 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.117980 kubelet[3011]: E0129 16:24:07.117871 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.118259 kubelet[3011]: E0129 16:24:07.118240 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.118259 kubelet[3011]: W0129 16:24:07.118257 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.118452 kubelet[3011]: E0129 16:24:07.118271 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.118738 kubelet[3011]: E0129 16:24:07.118720 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.118738 kubelet[3011]: W0129 16:24:07.118736 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.118863 kubelet[3011]: E0129 16:24:07.118749 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.119026 kubelet[3011]: E0129 16:24:07.119011 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.119117 kubelet[3011]: W0129 16:24:07.119024 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.119173 kubelet[3011]: E0129 16:24:07.119119 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.119575 kubelet[3011]: E0129 16:24:07.119489 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.119575 kubelet[3011]: W0129 16:24:07.119574 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.119765 kubelet[3011]: E0129 16:24:07.119588 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.120512 kubelet[3011]: E0129 16:24:07.120439 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.120512 kubelet[3011]: W0129 16:24:07.120492 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.120635 kubelet[3011]: E0129 16:24:07.120528 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.121132 kubelet[3011]: E0129 16:24:07.121070 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.121675 kubelet[3011]: W0129 16:24:07.121213 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.121675 kubelet[3011]: E0129 16:24:07.121238 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.121775 kubelet[3011]: E0129 16:24:07.121749 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.121775 kubelet[3011]: W0129 16:24:07.121763 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.121874 kubelet[3011]: E0129 16:24:07.121775 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.122109 kubelet[3011]: E0129 16:24:07.122079 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.122109 kubelet[3011]: W0129 16:24:07.122105 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.122176 kubelet[3011]: E0129 16:24:07.122117 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.122952 kubelet[3011]: E0129 16:24:07.122341 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.122952 kubelet[3011]: W0129 16:24:07.122353 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.122952 kubelet[3011]: E0129 16:24:07.122367 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.122952 kubelet[3011]: E0129 16:24:07.122637 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.122952 kubelet[3011]: W0129 16:24:07.122664 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.122952 kubelet[3011]: E0129 16:24:07.122674 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.123172 kubelet[3011]: E0129 16:24:07.123056 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.123172 kubelet[3011]: W0129 16:24:07.123066 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.123172 kubelet[3011]: E0129 16:24:07.123077 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.181720 kubelet[3011]: E0129 16:24:07.181490 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.181720 kubelet[3011]: W0129 16:24:07.181539 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.181720 kubelet[3011]: E0129 16:24:07.181570 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.184372 kubelet[3011]: E0129 16:24:07.184224 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.184372 kubelet[3011]: W0129 16:24:07.184269 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.184372 kubelet[3011]: E0129 16:24:07.184303 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.184716 kubelet[3011]: E0129 16:24:07.184675 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.184716 kubelet[3011]: W0129 16:24:07.184688 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.184995 kubelet[3011]: E0129 16:24:07.184915 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.185608 kubelet[3011]: E0129 16:24:07.185582 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.185608 kubelet[3011]: W0129 16:24:07.185604 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.185847 kubelet[3011]: E0129 16:24:07.185638 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.186342 kubelet[3011]: E0129 16:24:07.186192 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.186342 kubelet[3011]: W0129 16:24:07.186222 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.186342 kubelet[3011]: E0129 16:24:07.186251 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.187191 kubelet[3011]: E0129 16:24:07.186989 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.187191 kubelet[3011]: W0129 16:24:07.187021 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.187294 kubelet[3011]: E0129 16:24:07.187267 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.187630 kubelet[3011]: E0129 16:24:07.187466 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.187630 kubelet[3011]: W0129 16:24:07.187481 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.187818 kubelet[3011]: E0129 16:24:07.187752 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.187998 kubelet[3011]: E0129 16:24:07.187935 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.187998 kubelet[3011]: W0129 16:24:07.187948 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.188156 kubelet[3011]: E0129 16:24:07.188059 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.188303 kubelet[3011]: E0129 16:24:07.188289 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.188484 kubelet[3011]: W0129 16:24:07.188355 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.188484 kubelet[3011]: E0129 16:24:07.188386 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.189089 kubelet[3011]: E0129 16:24:07.189037 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.189089 kubelet[3011]: W0129 16:24:07.189064 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.189342 kubelet[3011]: E0129 16:24:07.189191 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.190938 kubelet[3011]: E0129 16:24:07.190906 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.191772 kubelet[3011]: W0129 16:24:07.191072 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.191772 kubelet[3011]: E0129 16:24:07.191112 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.192845 kubelet[3011]: E0129 16:24:07.192714 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.192845 kubelet[3011]: W0129 16:24:07.192750 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.192845 kubelet[3011]: E0129 16:24:07.192808 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.193734 kubelet[3011]: E0129 16:24:07.193577 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.193734 kubelet[3011]: W0129 16:24:07.193606 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.193734 kubelet[3011]: E0129 16:24:07.193664 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.194661 kubelet[3011]: E0129 16:24:07.194598 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.194829 kubelet[3011]: W0129 16:24:07.194793 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.195071 kubelet[3011]: E0129 16:24:07.194926 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.195325 kubelet[3011]: E0129 16:24:07.195308 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.195566 kubelet[3011]: W0129 16:24:07.195391 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.195566 kubelet[3011]: E0129 16:24:07.195460 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.196383 kubelet[3011]: E0129 16:24:07.196126 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.196383 kubelet[3011]: W0129 16:24:07.196146 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.196383 kubelet[3011]: E0129 16:24:07.196176 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.196658 kubelet[3011]: E0129 16:24:07.196543 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.196658 kubelet[3011]: W0129 16:24:07.196566 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.197589 kubelet[3011]: E0129 16:24:07.197386 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 16:24:07.198150 kubelet[3011]: E0129 16:24:07.198119 3011 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 16:24:07.198150 kubelet[3011]: W0129 16:24:07.198147 3011 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 16:24:07.198229 kubelet[3011]: E0129 16:24:07.198172 3011 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 16:24:07.621778 containerd[1498]: time="2025-01-29T16:24:07.621010020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:07.622238 containerd[1498]: time="2025-01-29T16:24:07.622187611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 29 16:24:07.623066 containerd[1498]: time="2025-01-29T16:24:07.623031674Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:07.627762 containerd[1498]: time="2025-01-29T16:24:07.627708278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:07.628484 containerd[1498]: time="2025-01-29T16:24:07.628439737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.403800564s" Jan 29 16:24:07.628484 containerd[1498]: time="2025-01-29T16:24:07.628482498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 29 16:24:07.633725 containerd[1498]: time="2025-01-29T16:24:07.633629835Z" level=info msg="CreateContainer within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 16:24:07.655164 containerd[1498]: time="2025-01-29T16:24:07.655062763Z" level=info msg="CreateContainer within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6\"" Jan 29 16:24:07.658925 containerd[1498]: time="2025-01-29T16:24:07.657922359Z" level=info msg="StartContainer for \"dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6\"" Jan 29 16:24:07.710085 systemd[1]: Started cri-containerd-dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6.scope - libcontainer container dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6. Jan 29 16:24:07.749213 containerd[1498]: time="2025-01-29T16:24:07.749103698Z" level=info msg="StartContainer for \"dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6\" returns successfully" Jan 29 16:24:07.772602 systemd[1]: cri-containerd-dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6.scope: Deactivated successfully. 
Jan 29 16:24:07.932419 containerd[1498]: time="2025-01-29T16:24:07.932192036Z" level=info msg="shim disconnected" id=dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6 namespace=k8s.io Jan 29 16:24:07.932419 containerd[1498]: time="2025-01-29T16:24:07.932266198Z" level=warning msg="cleaning up after shim disconnected" id=dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6 namespace=k8s.io Jan 29 16:24:07.932419 containerd[1498]: time="2025-01-29T16:24:07.932276758Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:24:08.081793 kubelet[3011]: I0129 16:24:08.079406 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:08.082279 containerd[1498]: time="2025-01-29T16:24:08.081967274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 16:24:08.232715 systemd[1]: run-containerd-runc-k8s.io-dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6-runc.q5wvkV.mount: Deactivated successfully. Jan 29 16:24:08.232867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc136054b2386c1a61c2ac02bce6c1b4dba731f0b8fee5577498ff37aedecac6-rootfs.mount: Deactivated successfully. Jan 29 16:24:08.918854 kubelet[3011]: E0129 16:24:08.917373 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:10.622496 containerd[1498]: time="2025-01-29T16:24:10.621360804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:10.622496 containerd[1498]: time="2025-01-29T16:24:10.622400993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 29 16:24:10.623233 containerd[1498]: time="2025-01-29T16:24:10.623192854Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:10.626155 containerd[1498]: time="2025-01-29T16:24:10.626092734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:10.627252 containerd[1498]: time="2025-01-29T16:24:10.627207244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.545190209s" Jan 29 16:24:10.627252 containerd[1498]: time="2025-01-29T16:24:10.627250885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 29 16:24:10.632646 containerd[1498]: time="2025-01-29T16:24:10.632599392Z" level=info msg="CreateContainer within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 16:24:10.658297 containerd[1498]: time="2025-01-29T16:24:10.658222414Z" 
level=info msg="CreateContainer within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a\"" Jan 29 16:24:10.660108 containerd[1498]: time="2025-01-29T16:24:10.660042024Z" level=info msg="StartContainer for \"cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a\"" Jan 29 16:24:10.698458 systemd[1]: Started cri-containerd-cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a.scope - libcontainer container cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a. Jan 29 16:24:10.737623 containerd[1498]: time="2025-01-29T16:24:10.737522507Z" level=info msg="StartContainer for \"cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a\" returns successfully" Jan 29 16:24:10.918114 kubelet[3011]: E0129 16:24:10.917663 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:11.535198 containerd[1498]: time="2025-01-29T16:24:11.535148625Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 16:24:11.540303 systemd[1]: cri-containerd-cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a.scope: Deactivated successfully. Jan 29 16:24:11.540681 systemd[1]: cri-containerd-cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a.scope: Consumed 527ms CPU time, 167M memory peak, 147.4M written to disk. Jan 29 16:24:11.560690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a-rootfs.mount: Deactivated successfully. Jan 29 16:24:11.628472 kubelet[3011]: I0129 16:24:11.628386 3011 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 16:24:11.674970 kubelet[3011]: I0129 16:24:11.674292 3011 topology_manager.go:215] "Topology Admit Handler" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x54mr" Jan 29 16:24:11.686006 systemd[1]: Created slice kubepods-burstable-pod62f55e30_6d37_4cf6_83d1_f987ecffdd26.slice - libcontainer container kubepods-burstable-pod62f55e30_6d37_4cf6_83d1_f987ecffdd26.slice. 
Jan 29 16:24:11.696298 kubelet[3011]: I0129 16:24:11.695549 3011 topology_manager.go:215] "Topology Admit Handler" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" podNamespace="calico-apiserver" podName="calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:11.697318 kubelet[3011]: I0129 16:24:11.696778 3011 topology_manager.go:215] "Topology Admit Handler" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" podNamespace="calico-apiserver" podName="calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:11.700872 kubelet[3011]: I0129 16:24:11.700394 3011 topology_manager.go:215] "Topology Admit Handler" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:11.702396 kubelet[3011]: I0129 16:24:11.702049 3011 topology_manager.go:215] "Topology Admit Handler" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" podNamespace="calico-system" podName="calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:11.710821 systemd[1]: Created slice kubepods-besteffort-podf934c577_e456_48e1_b8cb_5d9754694bac.slice - libcontainer container kubepods-besteffort-podf934c577_e456_48e1_b8cb_5d9754694bac.slice. Jan 29 16:24:11.724168 containerd[1498]: time="2025-01-29T16:24:11.723502596Z" level=info msg="shim disconnected" id=cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a namespace=k8s.io Jan 29 16:24:11.724168 containerd[1498]: time="2025-01-29T16:24:11.723610199Z" level=warning msg="cleaning up after shim disconnected" id=cf1d959524b5f984528b7f8149dff01b2268166126aa4680967f6f6ca552fd1a namespace=k8s.io Jan 29 16:24:11.724168 containerd[1498]: time="2025-01-29T16:24:11.723623640Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:24:11.733277 systemd[1]: Created slice kubepods-besteffort-podc7972d1b_4410_4bcb_97d1_bf72f6d2582a.slice - libcontainer container kubepods-besteffort-podc7972d1b_4410_4bcb_97d1_bf72f6d2582a.slice. Jan 29 16:24:11.754763 systemd[1]: Created slice kubepods-burstable-podbb0afd5a_9d39_4388_bdc4_9dfe26dc7d10.slice - libcontainer container kubepods-burstable-podbb0afd5a_9d39_4388_bdc4_9dfe26dc7d10.slice. Jan 29 16:24:11.769302 systemd[1]: Created slice kubepods-besteffort-pod7c160f15_f2fa_4241_9a8e_11ff1238cb71.slice - libcontainer container kubepods-besteffort-pod7c160f15_f2fa_4241_9a8e_11ff1238cb71.slice. 
Jan 29 16:24:11.819362 kubelet[3011]: I0129 16:24:11.819218 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-696ck\" (UniqueName: \"kubernetes.io/projected/62f55e30-6d37-4cf6-83d1-f987ecffdd26-kube-api-access-696ck\") pod \"coredns-7db6d8ff4d-x54mr\" (UID: \"62f55e30-6d37-4cf6-83d1-f987ecffdd26\") " pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:11.819362 kubelet[3011]: I0129 16:24:11.819276 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48dcm\" (UniqueName: \"kubernetes.io/projected/f934c577-e456-48e1-b8cb-5d9754694bac-kube-api-access-48dcm\") pod \"calico-apiserver-5d75c7df-kdpvr\" (UID: \"f934c577-e456-48e1-b8cb-5d9754694bac\") " pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:11.819362 kubelet[3011]: I0129 16:24:11.819308 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95szw\" (UniqueName: \"kubernetes.io/projected/bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10-kube-api-access-95szw\") pod \"coredns-7db6d8ff4d-4b7lh\" (UID: \"bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10\") " pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:11.819362 kubelet[3011]: I0129 16:24:11.819331 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l299\" (UniqueName: \"kubernetes.io/projected/7c160f15-f2fa-4241-9a8e-11ff1238cb71-kube-api-access-2l299\") pod \"calico-kube-controllers-589789ffcb-dfxmb\" (UID: \"7c160f15-f2fa-4241-9a8e-11ff1238cb71\") " pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:11.820148 kubelet[3011]: I0129 16:24:11.820111 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f934c577-e456-48e1-b8cb-5d9754694bac-calico-apiserver-certs\") pod \"calico-apiserver-5d75c7df-kdpvr\" (UID: \"f934c577-e456-48e1-b8cb-5d9754694bac\") " pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:11.820248 kubelet[3011]: I0129 16:24:11.820218 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnfkj\" (UniqueName: \"kubernetes.io/projected/c7972d1b-4410-4bcb-97d1-bf72f6d2582a-kube-api-access-fnfkj\") pod \"calico-apiserver-5d75c7df-94j9d\" (UID: \"c7972d1b-4410-4bcb-97d1-bf72f6d2582a\") " pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:11.820293 kubelet[3011]: I0129 16:24:11.820274 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62f55e30-6d37-4cf6-83d1-f987ecffdd26-config-volume\") pod \"coredns-7db6d8ff4d-x54mr\" (UID: \"62f55e30-6d37-4cf6-83d1-f987ecffdd26\") " pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:11.820350 kubelet[3011]: I0129 16:24:11.820311 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c7972d1b-4410-4bcb-97d1-bf72f6d2582a-calico-apiserver-certs\") pod \"calico-apiserver-5d75c7df-94j9d\" (UID: \"c7972d1b-4410-4bcb-97d1-bf72f6d2582a\") " pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:11.820385 kubelet[3011]: I0129 16:24:11.820373 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10-config-volume\") pod \"coredns-7db6d8ff4d-4b7lh\" (UID: \"bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10\") " pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:11.820439 kubelet[3011]: I0129 16:24:11.820425 3011 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c160f15-f2fa-4241-9a8e-11ff1238cb71-tigera-ca-bundle\") pod \"calico-kube-controllers-589789ffcb-dfxmb\" (UID: \"7c160f15-f2fa-4241-9a8e-11ff1238cb71\") " pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:11.994553 containerd[1498]: time="2025-01-29T16:24:11.994496974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:0,}" Jan 29 16:24:12.030869 containerd[1498]: time="2025-01-29T16:24:12.029852760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:0,}" Jan 29 16:24:12.045917 containerd[1498]: time="2025-01-29T16:24:12.045870688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:0,}" Jan 29 16:24:12.069030 containerd[1498]: time="2025-01-29T16:24:12.068985413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:0,}" Jan 29 16:24:12.077395 containerd[1498]: time="2025-01-29T16:24:12.077261684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:0,}" Jan 29 16:24:12.108855 containerd[1498]: time="2025-01-29T16:24:12.108681082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 16:24:12.226800 containerd[1498]: time="2025-01-29T16:24:12.226641417Z" level=error msg="Failed to destroy network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.227356 containerd[1498]: time="2025-01-29T16:24:12.227201193Z" level=error msg="encountered an error cleaning up failed sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.227356 containerd[1498]: time="2025-01-29T16:24:12.227276515Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.227893 kubelet[3011]: E0129 16:24:12.227786 3011 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.228535 kubelet[3011]: E0129 16:24:12.227923 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:12.228535 kubelet[3011]: E0129 16:24:12.227943 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:12.228535 kubelet[3011]: E0129 16:24:12.227986 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:12.262862 containerd[1498]: time="2025-01-29T16:24:12.262029605Z" level=error msg="Failed to destroy network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.263337 containerd[1498]: time="2025-01-29T16:24:12.263292761Z" level=error msg="encountered an error cleaning up failed sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.263495 containerd[1498]: time="2025-01-29T16:24:12.263474446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 29 16:24:12.264291 kubelet[3011]: E0129 16:24:12.264213 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.264424 kubelet[3011]: E0129 16:24:12.264389 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:12.264424 kubelet[3011]: E0129 16:24:12.264415 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:12.264505 kubelet[3011]: E0129 16:24:12.264460 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:12.276090 containerd[1498]: time="2025-01-29T16:24:12.276035997Z" level=error msg="Failed to destroy network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.277250 containerd[1498]: time="2025-01-29T16:24:12.277172628Z" level=error msg="encountered an error cleaning up failed sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.278963 containerd[1498]: time="2025-01-29T16:24:12.278907597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.280011 kubelet[3011]: E0129 16:24:12.279766 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.280011 kubelet[3011]: E0129 16:24:12.279877 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:12.280011 kubelet[3011]: E0129 16:24:12.279899 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:12.281381 kubelet[3011]: E0129 16:24:12.279949 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:12.315468 containerd[1498]: time="2025-01-29T16:24:12.315225131Z" level=error msg="Failed to destroy network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.316027 containerd[1498]: time="2025-01-29T16:24:12.315991473Z" level=error msg="encountered an error cleaning up failed sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.316212 containerd[1498]: time="2025-01-29T16:24:12.316189078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.316700 containerd[1498]: time="2025-01-29T16:24:12.316513647Z" level=error msg="Failed to destroy network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.317078 containerd[1498]: time="2025-01-29T16:24:12.316959700Z" level=error msg="encountered an error cleaning up failed sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.317078 containerd[1498]: time="2025-01-29T16:24:12.317030662Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.317703 kubelet[3011]: E0129 16:24:12.317370 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.317703 kubelet[3011]: E0129 16:24:12.317435 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:12.317703 kubelet[3011]: E0129 16:24:12.317454 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:12.317703 kubelet[3011]: E0129 16:24:12.317370 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:12.317957 kubelet[3011]: E0129 16:24:12.317495 3011 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:12.317957 kubelet[3011]: E0129 16:24:12.317523 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:12.317957 kubelet[3011]: E0129 16:24:12.317545 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:12.318050 kubelet[3011]: E0129 16:24:12.317638 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:12.938775 systemd[1]: Created slice kubepods-besteffort-podbec95826_331e_47ab_a0cc_d4c3b56446fd.slice - libcontainer container kubepods-besteffort-podbec95826_331e_47ab_a0cc_d4c3b56446fd.slice. 
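Every sandbox failure in this stretch shares the same root-cause line: stat /var/lib/calico/nodename: no such file or directory, with the log's own hint to check that the calico/node container is running and has mounted /var/lib/calico/. In other words, pod networking cannot come up until calico/node starts and writes that nodename file. A small Go sketch of that readiness condition is below, under the assumption (taken from the error text itself) that the file's presence is what the CNI plugin is stat-ing for.

```go
// nodename_check.go - rough sketch of the readiness check implied by the errors above:
// the Calico CNI plugin stats /var/lib/calico/nodename, which calico/node writes once
// it is running; until then every RunPodSandbox attempt fails.
package main

import (
	"errors"
	"fmt"
	"os"
)

// calicoNodeReady returns false (without error) when the nodename file is missing,
// matching the "no such file or directory" condition reported in the log.
func calicoNodeReady(path string) (bool, error) {
	_, err := os.Stat(path)
	if errors.Is(err, os.ErrNotExist) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	ready, err := calicoNodeReady("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, "stat failed:", err)
		os.Exit(1)
	}
	if !ready {
		fmt.Println("calico/node has not written /var/lib/calico/nodename yet")
		os.Exit(1)
	}
	fmt.Println("calico/node appears ready")
}
```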
Jan 29 16:24:12.941965 containerd[1498]: time="2025-01-29T16:24:12.941757351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:0,}" Jan 29 16:24:13.015193 containerd[1498]: time="2025-01-29T16:24:13.015124964Z" level=error msg="Failed to destroy network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.016052 containerd[1498]: time="2025-01-29T16:24:13.015991789Z" level=error msg="encountered an error cleaning up failed sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.016150 containerd[1498]: time="2025-01-29T16:24:13.016069031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.017029 kubelet[3011]: E0129 16:24:13.016985 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.017666 kubelet[3011]: E0129 16:24:13.017049 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:13.017666 kubelet[3011]: E0129 16:24:13.017071 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:13.017666 kubelet[3011]: E0129 16:24:13.017117 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:13.018560 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5-shm.mount: Deactivated successfully. Jan 29 16:24:13.110349 kubelet[3011]: I0129 16:24:13.110015 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706" Jan 29 16:24:13.112362 kubelet[3011]: I0129 16:24:13.112173 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6" Jan 29 16:24:13.117104 containerd[1498]: time="2025-01-29T16:24:13.116512462Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:13.117104 containerd[1498]: time="2025-01-29T16:24:13.116760989Z" level=info msg="Ensure that sandbox 603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706 in task-service has been cleanup successfully" Jan 29 16:24:13.122460 systemd[1]: run-netns-cni\x2d9a48ee16\x2db7fd\x2d3037\x2d3633\x2d240140ac3064.mount: Deactivated successfully. Jan 29 16:24:13.123252 kubelet[3011]: I0129 16:24:13.122919 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53" Jan 29 16:24:13.125489 containerd[1498]: time="2025-01-29T16:24:13.120762782Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:13.126306 containerd[1498]: time="2025-01-29T16:24:13.122067019Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:13.126306 containerd[1498]: time="2025-01-29T16:24:13.126081052Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:13.126306 containerd[1498]: time="2025-01-29T16:24:13.123907230Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:13.127175 containerd[1498]: time="2025-01-29T16:24:13.126709669Z" level=info msg="Ensure that sandbox 952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53 in task-service has been cleanup successfully" Jan 29 16:24:13.128196 containerd[1498]: time="2025-01-29T16:24:13.127769419Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:13.128196 containerd[1498]: time="2025-01-29T16:24:13.128018266Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:13.130356 containerd[1498]: time="2025-01-29T16:24:13.126954796Z" level=info msg="Ensure that sandbox 610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6 in task-service has been cleanup successfully" Jan 29 16:24:13.131439 containerd[1498]: time="2025-01-29T16:24:13.131395961Z" level=info msg="TearDown network for sandbox 
\"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:13.132209 containerd[1498]: time="2025-01-29T16:24:13.131708130Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:13.132647 containerd[1498]: time="2025-01-29T16:24:13.131335040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:1,}" Jan 29 16:24:13.133156 containerd[1498]: time="2025-01-29T16:24:13.133031048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:1,}" Jan 29 16:24:13.134408 kubelet[3011]: I0129 16:24:13.134372 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e" Jan 29 16:24:13.134834 containerd[1498]: time="2025-01-29T16:24:13.134744936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:1,}" Jan 29 16:24:13.136693 containerd[1498]: time="2025-01-29T16:24:13.136488665Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:13.137999 containerd[1498]: time="2025-01-29T16:24:13.137869184Z" level=info msg="Ensure that sandbox 1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e in task-service has been cleanup successfully" Jan 29 16:24:13.140244 containerd[1498]: time="2025-01-29T16:24:13.140202210Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:13.141032 containerd[1498]: time="2025-01-29T16:24:13.141001512Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:13.143014 containerd[1498]: time="2025-01-29T16:24:13.142626398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:1,}" Jan 29 16:24:13.143147 kubelet[3011]: I0129 16:24:13.143116 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5" Jan 29 16:24:13.146025 containerd[1498]: time="2025-01-29T16:24:13.145875050Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:13.146144 containerd[1498]: time="2025-01-29T16:24:13.146076455Z" level=info msg="Ensure that sandbox 038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5 in task-service has been cleanup successfully" Jan 29 16:24:13.146659 containerd[1498]: time="2025-01-29T16:24:13.146331742Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:13.146659 containerd[1498]: time="2025-01-29T16:24:13.146352063Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:13.148804 containerd[1498]: time="2025-01-29T16:24:13.147038442Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:1,}" Jan 29 16:24:13.149964 kubelet[3011]: I0129 16:24:13.149643 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892" Jan 29 16:24:13.151936 containerd[1498]: time="2025-01-29T16:24:13.151187519Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:13.151936 containerd[1498]: time="2025-01-29T16:24:13.151357004Z" level=info msg="Ensure that sandbox c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892 in task-service has been cleanup successfully" Jan 29 16:24:13.152244 containerd[1498]: time="2025-01-29T16:24:13.152201308Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:13.152325 containerd[1498]: time="2025-01-29T16:24:13.152311751Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:13.153279 containerd[1498]: time="2025-01-29T16:24:13.153241657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:1,}" Jan 29 16:24:13.341992 containerd[1498]: time="2025-01-29T16:24:13.341802052Z" level=error msg="Failed to destroy network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.342704 containerd[1498]: time="2025-01-29T16:24:13.342446830Z" level=error msg="encountered an error cleaning up failed sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.342704 containerd[1498]: time="2025-01-29T16:24:13.342543313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.342945 kubelet[3011]: E0129 16:24:13.342894 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.344435 kubelet[3011]: E0129 16:24:13.342962 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:13.344435 kubelet[3011]: E0129 16:24:13.343925 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:13.344435 kubelet[3011]: E0129 16:24:13.344003 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:13.400464 containerd[1498]: time="2025-01-29T16:24:13.400408104Z" level=error msg="Failed to destroy network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.405803 containerd[1498]: time="2025-01-29T16:24:13.405555529Z" level=error msg="encountered an error cleaning up failed sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.405803 containerd[1498]: time="2025-01-29T16:24:13.405676932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.406693 kubelet[3011]: E0129 16:24:13.406531 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.406693 kubelet[3011]: E0129 16:24:13.406633 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:13.406693 kubelet[3011]: E0129 16:24:13.406662 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:13.408782 kubelet[3011]: E0129 16:24:13.406701 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:13.427081 containerd[1498]: time="2025-01-29T16:24:13.426913411Z" level=error msg="Failed to destroy network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.427531 containerd[1498]: time="2025-01-29T16:24:13.427504348Z" level=error msg="encountered an error cleaning up failed sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.427684 containerd[1498]: time="2025-01-29T16:24:13.427657672Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.428398 kubelet[3011]: E0129 16:24:13.428354 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.428513 
kubelet[3011]: E0129 16:24:13.428415 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:13.428513 kubelet[3011]: E0129 16:24:13.428443 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:13.430351 kubelet[3011]: E0129 16:24:13.428625 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:13.449199 containerd[1498]: time="2025-01-29T16:24:13.449013194Z" level=error msg="Failed to destroy network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.450333 containerd[1498]: time="2025-01-29T16:24:13.450172187Z" level=error msg="encountered an error cleaning up failed sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.450548 containerd[1498]: time="2025-01-29T16:24:13.450511796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.452688 kubelet[3011]: E0129 16:24:13.451149 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 29 16:24:13.452688 kubelet[3011]: E0129 16:24:13.451215 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:13.452688 kubelet[3011]: E0129 16:24:13.451236 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:13.452983 kubelet[3011]: E0129 16:24:13.451289 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:13.455270 containerd[1498]: time="2025-01-29T16:24:13.455118246Z" level=error msg="Failed to destroy network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.456717 containerd[1498]: time="2025-01-29T16:24:13.456560647Z" level=error msg="encountered an error cleaning up failed sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.457022 containerd[1498]: time="2025-01-29T16:24:13.456944817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.458456 kubelet[3011]: E0129 16:24:13.458338 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.458456 kubelet[3011]: E0129 16:24:13.458448 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:13.458598 kubelet[3011]: E0129 16:24:13.458476 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:13.458598 kubelet[3011]: E0129 16:24:13.458526 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:13.465550 containerd[1498]: time="2025-01-29T16:24:13.465139488Z" level=error msg="Failed to destroy network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.466025 containerd[1498]: time="2025-01-29T16:24:13.465794667Z" level=error msg="encountered an error cleaning up failed sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.466025 containerd[1498]: time="2025-01-29T16:24:13.465922431Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.466241 kubelet[3011]: E0129 16:24:13.466183 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:13.466330 kubelet[3011]: E0129 16:24:13.466265 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:13.466330 kubelet[3011]: E0129 16:24:13.466290 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:13.466503 kubelet[3011]: E0129 16:24:13.466439 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:13.934578 systemd[1]: run-netns-cni\x2d75d43a6e\x2d4a1a\x2d9f12\x2df472\x2df8f99d06ef28.mount: Deactivated successfully. Jan 29 16:24:13.935027 systemd[1]: run-netns-cni\x2d424d4598\x2d1097\x2d9f8c\x2d5330\x2df614a8e47141.mount: Deactivated successfully. Jan 29 16:24:13.935076 systemd[1]: run-netns-cni\x2d2a9bd112\x2df5fa\x2df452\x2dd839\x2d0d08be2d2eba.mount: Deactivated successfully. Jan 29 16:24:13.935123 systemd[1]: run-netns-cni\x2d9c5c4883\x2d4765\x2d903a\x2deaa9\x2d5643da0d05eb.mount: Deactivated successfully. Jan 29 16:24:13.935167 systemd[1]: run-netns-cni\x2d96348c25\x2d56fe\x2d8825\x2dccbf\x2df53dad7c76be.mount: Deactivated successfully. 
Jan 29 16:24:14.158913 kubelet[3011]: I0129 16:24:14.158111 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303" Jan 29 16:24:14.159381 containerd[1498]: time="2025-01-29T16:24:14.159322374Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:14.160905 containerd[1498]: time="2025-01-29T16:24:14.159914911Z" level=info msg="Ensure that sandbox b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303 in task-service has been cleanup successfully" Jan 29 16:24:14.160905 containerd[1498]: time="2025-01-29T16:24:14.160193239Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:14.160905 containerd[1498]: time="2025-01-29T16:24:14.160226720Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:14.161852 containerd[1498]: time="2025-01-29T16:24:14.161805165Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:14.162144 containerd[1498]: time="2025-01-29T16:24:14.162103413Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:14.162144 containerd[1498]: time="2025-01-29T16:24:14.162121174Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:14.163085 containerd[1498]: time="2025-01-29T16:24:14.162848834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:2,}" Jan 29 16:24:14.164958 systemd[1]: run-netns-cni\x2db84be27f\x2dcfe7\x2d7f39\x2d4ecc\x2d2a95107693a2.mount: Deactivated successfully. Jan 29 16:24:14.172713 kubelet[3011]: I0129 16:24:14.171858 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d" Jan 29 16:24:14.173512 containerd[1498]: time="2025-01-29T16:24:14.173111526Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:14.173512 containerd[1498]: time="2025-01-29T16:24:14.173307292Z" level=info msg="Ensure that sandbox 9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d in task-service has been cleanup successfully" Jan 29 16:24:14.176360 systemd[1]: run-netns-cni\x2d95f05ee5\x2dd496\x2d20e0\x2d946b\x2d962852c15869.mount: Deactivated successfully. 
Jan 29 16:24:14.177216 containerd[1498]: time="2025-01-29T16:24:14.176963236Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:14.177216 containerd[1498]: time="2025-01-29T16:24:14.177007557Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:14.180559 containerd[1498]: time="2025-01-29T16:24:14.178222391Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:14.180559 containerd[1498]: time="2025-01-29T16:24:14.178348875Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:14.180559 containerd[1498]: time="2025-01-29T16:24:14.178360675Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:14.180559 containerd[1498]: time="2025-01-29T16:24:14.179369504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:2,}" Jan 29 16:24:14.183564 kubelet[3011]: I0129 16:24:14.183530 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd" Jan 29 16:24:14.187617 containerd[1498]: time="2025-01-29T16:24:14.185615882Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:14.187617 containerd[1498]: time="2025-01-29T16:24:14.185879329Z" level=info msg="Ensure that sandbox ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd in task-service has been cleanup successfully" Jan 29 16:24:14.187617 containerd[1498]: time="2025-01-29T16:24:14.186072855Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:14.187617 containerd[1498]: time="2025-01-29T16:24:14.186088015Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:14.188570 systemd[1]: run-netns-cni\x2da3014980\x2d98b5\x2d2f19\x2d11d4\x2df543420429e3.mount: Deactivated successfully. 
Jan 29 16:24:14.190299 containerd[1498]: time="2025-01-29T16:24:14.190157211Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:14.190299 containerd[1498]: time="2025-01-29T16:24:14.190279894Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:14.191115 containerd[1498]: time="2025-01-29T16:24:14.190322815Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:14.196308 kubelet[3011]: I0129 16:24:14.194340 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19" Jan 29 16:24:14.201791 containerd[1498]: time="2025-01-29T16:24:14.201746900Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:14.202124 containerd[1498]: time="2025-01-29T16:24:14.202098590Z" level=info msg="Ensure that sandbox 0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19 in task-service has been cleanup successfully" Jan 29 16:24:14.204079 containerd[1498]: time="2025-01-29T16:24:14.204034925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:24:14.204800 containerd[1498]: time="2025-01-29T16:24:14.204631902Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:14.204800 containerd[1498]: time="2025-01-29T16:24:14.204661303Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:14.206959 containerd[1498]: time="2025-01-29T16:24:14.206913047Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:14.207568 containerd[1498]: time="2025-01-29T16:24:14.207334779Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:14.207568 containerd[1498]: time="2025-01-29T16:24:14.207542865Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:14.209888 containerd[1498]: time="2025-01-29T16:24:14.209587723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:2,}" Jan 29 16:24:14.210072 kubelet[3011]: I0129 16:24:14.209583 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231" Jan 29 16:24:14.220430 containerd[1498]: time="2025-01-29T16:24:14.220371990Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:14.221901 containerd[1498]: time="2025-01-29T16:24:14.220625997Z" level=info msg="Ensure that sandbox 681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231 in task-service has been cleanup successfully" Jan 29 16:24:14.222764 containerd[1498]: time="2025-01-29T16:24:14.222727817Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" 
Jan 29 16:24:14.222939 containerd[1498]: time="2025-01-29T16:24:14.222920622Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:14.228659 containerd[1498]: time="2025-01-29T16:24:14.228620224Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:14.228973 containerd[1498]: time="2025-01-29T16:24:14.228945554Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:14.229445 containerd[1498]: time="2025-01-29T16:24:14.229370006Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:14.232766 kubelet[3011]: I0129 16:24:14.232725 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1" Jan 29 16:24:14.236206 containerd[1498]: time="2025-01-29T16:24:14.235452779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:2,}" Jan 29 16:24:14.237172 containerd[1498]: time="2025-01-29T16:24:14.235470499Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:14.239377 containerd[1498]: time="2025-01-29T16:24:14.238943678Z" level=info msg="Ensure that sandbox 5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1 in task-service has been cleanup successfully" Jan 29 16:24:14.240236 containerd[1498]: time="2025-01-29T16:24:14.240202354Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:14.240697 containerd[1498]: time="2025-01-29T16:24:14.240586084Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:14.241580 containerd[1498]: time="2025-01-29T16:24:14.241494070Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:14.241859 containerd[1498]: time="2025-01-29T16:24:14.241840320Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:14.242190 containerd[1498]: time="2025-01-29T16:24:14.241985604Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:14.246101 containerd[1498]: time="2025-01-29T16:24:14.246049560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:2,}" Jan 29 16:24:14.444303 containerd[1498]: time="2025-01-29T16:24:14.444249595Z" level=error msg="Failed to destroy network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.446904 containerd[1498]: time="2025-01-29T16:24:14.446760547Z" level=error msg="encountered an error cleaning up failed sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.447337 containerd[1498]: time="2025-01-29T16:24:14.447098676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.447471 kubelet[3011]: E0129 16:24:14.447364 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.447471 kubelet[3011]: E0129 16:24:14.447427 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:14.447471 kubelet[3011]: E0129 16:24:14.447450 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:14.448268 kubelet[3011]: E0129 16:24:14.447504 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:14.499508 containerd[1498]: time="2025-01-29T16:24:14.499459485Z" level=error msg="Failed to destroy network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.501665 containerd[1498]: 
time="2025-01-29T16:24:14.501474462Z" level=error msg="encountered an error cleaning up failed sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.502898 containerd[1498]: time="2025-01-29T16:24:14.502840621Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.504142 kubelet[3011]: E0129 16:24:14.503564 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.504142 kubelet[3011]: E0129 16:24:14.503690 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:14.504142 kubelet[3011]: E0129 16:24:14.503712 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:14.504344 kubelet[3011]: E0129 16:24:14.503762 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:14.546950 containerd[1498]: time="2025-01-29T16:24:14.546901234Z" level=error msg="Failed to destroy network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 16:24:14.547278 containerd[1498]: time="2025-01-29T16:24:14.547253204Z" level=error msg="Failed to destroy network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.548087 containerd[1498]: time="2025-01-29T16:24:14.548040666Z" level=error msg="encountered an error cleaning up failed sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.548484 containerd[1498]: time="2025-01-29T16:24:14.548256232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.548757 containerd[1498]: time="2025-01-29T16:24:14.548465118Z" level=error msg="encountered an error cleaning up failed sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.548913 containerd[1498]: time="2025-01-29T16:24:14.548892771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.549767 kubelet[3011]: E0129 16:24:14.549231 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.549767 kubelet[3011]: E0129 16:24:14.549292 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:14.549767 kubelet[3011]: E0129 16:24:14.549314 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:14.549970 kubelet[3011]: E0129 16:24:14.549353 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:14.549970 kubelet[3011]: E0129 16:24:14.549399 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.549970 kubelet[3011]: E0129 16:24:14.549415 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:14.550068 kubelet[3011]: E0129 16:24:14.549427 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:14.550068 kubelet[3011]: E0129 16:24:14.549451 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:14.565471 containerd[1498]: time="2025-01-29T16:24:14.565039630Z" level=error msg="Failed to destroy 
network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.566919 containerd[1498]: time="2025-01-29T16:24:14.566870522Z" level=error msg="encountered an error cleaning up failed sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.567210 containerd[1498]: time="2025-01-29T16:24:14.567103168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.567807 kubelet[3011]: E0129 16:24:14.567530 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.567807 kubelet[3011]: E0129 16:24:14.567591 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:14.567807 kubelet[3011]: E0129 16:24:14.567628 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:14.568008 kubelet[3011]: E0129 16:24:14.567695 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:14.576304 containerd[1498]: 
time="2025-01-29T16:24:14.575922779Z" level=error msg="Failed to destroy network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.576845 containerd[1498]: time="2025-01-29T16:24:14.576733402Z" level=error msg="encountered an error cleaning up failed sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.577138 containerd[1498]: time="2025-01-29T16:24:14.576983489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.577463 kubelet[3011]: E0129 16:24:14.577345 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:14.577463 kubelet[3011]: E0129 16:24:14.577410 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:14.577463 kubelet[3011]: E0129 16:24:14.577439 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:14.577616 kubelet[3011]: E0129 16:24:14.577483 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" 
podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:14.935067 systemd[1]: run-netns-cni\x2d9ea0f7e6\x2d24fe\x2d8ecc\x2d47e2\x2d78b8f659d021.mount: Deactivated successfully. Jan 29 16:24:14.935233 systemd[1]: run-netns-cni\x2d8b45f85e\x2d64f6\x2d9280\x2d0109\x2df4aa44f50339.mount: Deactivated successfully. Jan 29 16:24:14.935282 systemd[1]: run-netns-cni\x2d4d0863b0\x2d0409\x2d3f20\x2d1759\x2d8fc6d302fa9b.mount: Deactivated successfully. Jan 29 16:24:15.239232 kubelet[3011]: I0129 16:24:15.238433 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4" Jan 29 16:24:15.239403 containerd[1498]: time="2025-01-29T16:24:15.239232096Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:15.239736 containerd[1498]: time="2025-01-29T16:24:15.239415821Z" level=info msg="Ensure that sandbox 85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4 in task-service has been cleanup successfully" Jan 29 16:24:15.241249 containerd[1498]: time="2025-01-29T16:24:15.241197792Z" level=info msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:15.241249 containerd[1498]: time="2025-01-29T16:24:15.241236713Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:15.243451 systemd[1]: run-netns-cni\x2db211ee04\x2d9788\x2d2cfb\x2d6cb0\x2dcb96e65da3e4.mount: Deactivated successfully. Jan 29 16:24:15.246368 containerd[1498]: time="2025-01-29T16:24:15.245772363Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:15.246368 containerd[1498]: time="2025-01-29T16:24:15.245920008Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:15.246368 containerd[1498]: time="2025-01-29T16:24:15.245931328Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:15.247344 containerd[1498]: time="2025-01-29T16:24:15.247307687Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:15.247638 containerd[1498]: time="2025-01-29T16:24:15.247432211Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:15.247638 containerd[1498]: time="2025-01-29T16:24:15.247452211Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:15.249764 containerd[1498]: time="2025-01-29T16:24:15.249423708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:3,}" Jan 29 16:24:15.250853 kubelet[3011]: I0129 16:24:15.250708 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3" Jan 29 16:24:15.254866 containerd[1498]: time="2025-01-29T16:24:15.251885459Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:15.254866 containerd[1498]: time="2025-01-29T16:24:15.252111105Z" level=info msg="Ensure 
that sandbox 55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3 in task-service has been cleanup successfully" Jan 29 16:24:15.254866 containerd[1498]: time="2025-01-29T16:24:15.252310191Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:15.254866 containerd[1498]: time="2025-01-29T16:24:15.252325671Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:15.255084 containerd[1498]: time="2025-01-29T16:24:15.255047749Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:15.255842 containerd[1498]: time="2025-01-29T16:24:15.255167833Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:15.255842 containerd[1498]: time="2025-01-29T16:24:15.255192553Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:15.256908 containerd[1498]: time="2025-01-29T16:24:15.256742238Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:15.257035 containerd[1498]: time="2025-01-29T16:24:15.256989685Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:15.257035 containerd[1498]: time="2025-01-29T16:24:15.257008005Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:15.259142 containerd[1498]: time="2025-01-29T16:24:15.257884231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:3,}" Jan 29 16:24:15.259337 kubelet[3011]: I0129 16:24:15.258806 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5" Jan 29 16:24:15.258489 systemd[1]: run-netns-cni\x2de8411035\x2d71e9\x2d2539\x2d4824\x2d5e5c1507ab69.mount: Deactivated successfully. 
Jan 29 16:24:15.259445 containerd[1498]: time="2025-01-29T16:24:15.259382074Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.259572119Z" level=info msg="Ensure that sandbox e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5 in task-service has been cleanup successfully" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.259788725Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.259804286Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.262030109Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.262152553Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:15.263858 containerd[1498]: time="2025-01-29T16:24:15.262162873Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:15.266301 containerd[1498]: time="2025-01-29T16:24:15.265681774Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:15.266301 containerd[1498]: time="2025-01-29T16:24:15.265862979Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:15.266301 containerd[1498]: time="2025-01-29T16:24:15.265875300Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:15.266147 systemd[1]: run-netns-cni\x2d9db729d7\x2df719\x2d3f7f\x2d210a\x2d2a72b2647f82.mount: Deactivated successfully. 
Jan 29 16:24:15.268445 containerd[1498]: time="2025-01-29T16:24:15.267775794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:3,}" Jan 29 16:24:15.269489 kubelet[3011]: I0129 16:24:15.269459 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906" Jan 29 16:24:15.277212 containerd[1498]: time="2025-01-29T16:24:15.274648791Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:15.277212 containerd[1498]: time="2025-01-29T16:24:15.274904639Z" level=info msg="Ensure that sandbox 1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906 in task-service has been cleanup successfully" Jan 29 16:24:15.277212 containerd[1498]: time="2025-01-29T16:24:15.275998990Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:15.277212 containerd[1498]: time="2025-01-29T16:24:15.276031231Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:15.279248 systemd[1]: run-netns-cni\x2db0eb1cdd\x2d5470\x2d0266\x2d0cf7\x2d43ab727ecfad.mount: Deactivated successfully. Jan 29 16:24:15.280846 containerd[1498]: time="2025-01-29T16:24:15.280789367Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:15.282677 containerd[1498]: time="2025-01-29T16:24:15.282642100Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:15.283210 containerd[1498]: time="2025-01-29T16:24:15.282719703Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:15.283806 containerd[1498]: time="2025-01-29T16:24:15.283682090Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:15.285708 kubelet[3011]: I0129 16:24:15.284562 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6" Jan 29 16:24:15.285856 containerd[1498]: time="2025-01-29T16:24:15.285256655Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:15.285856 containerd[1498]: time="2025-01-29T16:24:15.285461981Z" level=info msg="Ensure that sandbox c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6 in task-service has been cleanup successfully" Jan 29 16:24:15.286143 containerd[1498]: time="2025-01-29T16:24:15.286058638Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:15.286227 containerd[1498]: time="2025-01-29T16:24:15.286213323Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:15.287933 containerd[1498]: time="2025-01-29T16:24:15.287665324Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:15.287933 containerd[1498]: time="2025-01-29T16:24:15.287916812Z" level=info msg="StopPodSandbox for 
\"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns successfully" Jan 29 16:24:15.290736 containerd[1498]: time="2025-01-29T16:24:15.290682091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:3,}" Jan 29 16:24:15.291038 containerd[1498]: time="2025-01-29T16:24:15.291009420Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:15.291150 containerd[1498]: time="2025-01-29T16:24:15.291117143Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:15.291150 containerd[1498]: time="2025-01-29T16:24:15.291130424Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:15.293402 containerd[1498]: time="2025-01-29T16:24:15.293327727Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:15.295325 containerd[1498]: time="2025-01-29T16:24:15.294433119Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:15.295706 containerd[1498]: time="2025-01-29T16:24:15.295536950Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:15.298837 containerd[1498]: time="2025-01-29T16:24:15.298256668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:3,}" Jan 29 16:24:15.300477 kubelet[3011]: I0129 16:24:15.299951 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e" Jan 29 16:24:15.301230 containerd[1498]: time="2025-01-29T16:24:15.301194432Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:15.302433 containerd[1498]: time="2025-01-29T16:24:15.302397387Z" level=info msg="Ensure that sandbox ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e in task-service has been cleanup successfully" Jan 29 16:24:15.303979 containerd[1498]: time="2025-01-29T16:24:15.303839748Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:15.303979 containerd[1498]: time="2025-01-29T16:24:15.303870669Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:15.305489 containerd[1498]: time="2025-01-29T16:24:15.305417033Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:15.305885 containerd[1498]: time="2025-01-29T16:24:15.305729242Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:15.305885 containerd[1498]: time="2025-01-29T16:24:15.305749523Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:15.306567 containerd[1498]: time="2025-01-29T16:24:15.306368621Z" level=info msg="StopPodSandbox for 
\"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:15.306567 containerd[1498]: time="2025-01-29T16:24:15.306483664Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:15.306567 containerd[1498]: time="2025-01-29T16:24:15.306493584Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:15.309149 containerd[1498]: time="2025-01-29T16:24:15.309108579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:3,}" Jan 29 16:24:15.525442 containerd[1498]: time="2025-01-29T16:24:15.525272217Z" level=error msg="Failed to destroy network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.531032 containerd[1498]: time="2025-01-29T16:24:15.530973701Z" level=error msg="encountered an error cleaning up failed sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.532733 containerd[1498]: time="2025-01-29T16:24:15.532516865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.533123 kubelet[3011]: E0129 16:24:15.533001 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.533123 kubelet[3011]: E0129 16:24:15.533070 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:15.533123 kubelet[3011]: E0129 16:24:15.533096 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:15.533790 kubelet[3011]: E0129 16:24:15.533136 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:15.566393 containerd[1498]: time="2025-01-29T16:24:15.566152549Z" level=error msg="Failed to destroy network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.568473 containerd[1498]: time="2025-01-29T16:24:15.567880399Z" level=error msg="encountered an error cleaning up failed sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.570871 containerd[1498]: time="2025-01-29T16:24:15.568921229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.571268 kubelet[3011]: E0129 16:24:15.571016 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.571268 kubelet[3011]: E0129 16:24:15.571080 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:15.571268 kubelet[3011]: E0129 16:24:15.571107 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:15.571397 kubelet[3011]: E0129 16:24:15.571146 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:15.602344 containerd[1498]: time="2025-01-29T16:24:15.601764490Z" level=error msg="Failed to destroy network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.605072 containerd[1498]: time="2025-01-29T16:24:15.604806297Z" level=error msg="encountered an error cleaning up failed sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.605584 containerd[1498]: time="2025-01-29T16:24:15.605302712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.606105 kubelet[3011]: E0129 16:24:15.606052 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.606197 kubelet[3011]: E0129 16:24:15.606122 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:15.606197 kubelet[3011]: E0129 16:24:15.606142 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:15.606253 kubelet[3011]: E0129 16:24:15.606192 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:15.617288 containerd[1498]: time="2025-01-29T16:24:15.617021008Z" level=error msg="Failed to destroy network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.618995 containerd[1498]: time="2025-01-29T16:24:15.618924582Z" level=error msg="encountered an error cleaning up failed sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.619956 containerd[1498]: time="2025-01-29T16:24:15.619748246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.620587 kubelet[3011]: E0129 16:24:15.620514 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.620587 kubelet[3011]: E0129 16:24:15.620583 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:15.620855 kubelet[3011]: E0129 16:24:15.620604 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:15.620855 kubelet[3011]: E0129 16:24:15.620664 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:15.639634 containerd[1498]: time="2025-01-29T16:24:15.639474611Z" level=error msg="Failed to destroy network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.641608 containerd[1498]: time="2025-01-29T16:24:15.641281863Z" level=error msg="encountered an error cleaning up failed sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.642240 containerd[1498]: time="2025-01-29T16:24:15.642099407Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.643108 kubelet[3011]: E0129 16:24:15.642979 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.643108 kubelet[3011]: E0129 16:24:15.643062 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:15.643108 kubelet[3011]: E0129 
16:24:15.643082 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:15.643571 kubelet[3011]: E0129 16:24:15.643366 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:15.672706 containerd[1498]: time="2025-01-29T16:24:15.672413676Z" level=error msg="Failed to destroy network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.673532 containerd[1498]: time="2025-01-29T16:24:15.673336622Z" level=error msg="encountered an error cleaning up failed sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.673532 containerd[1498]: time="2025-01-29T16:24:15.673425145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.673789 kubelet[3011]: E0129 16:24:15.673746 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:15.673861 kubelet[3011]: E0129 16:24:15.673841 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:15.673890 kubelet[3011]: E0129 16:24:15.673866 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:15.674012 kubelet[3011]: E0129 16:24:15.673914 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:15.935420 systemd[1]: run-netns-cni\x2df372a9e9\x2d3c2c\x2dc2a6\x2d750f\x2d13ed8273de5d.mount: Deactivated successfully. Jan 29 16:24:15.936661 systemd[1]: run-netns-cni\x2dee6a253d\x2deba4\x2d037e\x2d22e2\x2dfaf9ffe46edc.mount: Deactivated successfully. Jan 29 16:24:16.305773 kubelet[3011]: I0129 16:24:16.305735 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be" Jan 29 16:24:16.310894 containerd[1498]: time="2025-01-29T16:24:16.308369141Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:16.310894 containerd[1498]: time="2025-01-29T16:24:16.308567707Z" level=info msg="Ensure that sandbox fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be in task-service has been cleanup successfully" Jan 29 16:24:16.313336 systemd[1]: run-netns-cni\x2d200b3e33\x2d4553\x2d50d3\x2d49ae\x2d3cc48bb7034b.mount: Deactivated successfully. 
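Every RunPodSandbox attempt in this stretch fails inside the Calico CNI plugin with the same stat error, and the error text itself names the check to perform: confirm that calico/node is running on this host and has written its nodename file into the hostPath it mounts at /var/lib/calico/. A minimal on-node sketch of that check; the file path is taken verbatim from the errors above, everything else is illustrative:

package main

import (
	"fmt"
	"os"
)

// The file the Calico CNI plugin stats before it will set up pod networking,
// quoted directly in the errors in this log.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// Same condition the plugin reports: calico/node has not
			// (yet) written its nodename into the host mount.
			fmt.Printf("%s is missing: calico/node is not running or has not mounted /var/lib/calico/\n", nodenameFile)
			return
		}
		fmt.Printf("could not read %s: %v\n", nodenameFile, err)
		return
	}
	fmt.Printf("calico/node registered this host as %q\n", string(data))
}

On a typical Calico install the companion check would be that the calico-node DaemonSet pod scheduled to this host is actually Running, since that pod is what creates and maintains the nodename file.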
Jan 29 16:24:16.317140 containerd[1498]: time="2025-01-29T16:24:16.314263231Z" level=info msg="TearDown network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" successfully" Jan 29 16:24:16.318899 containerd[1498]: time="2025-01-29T16:24:16.316808225Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" returns successfully" Jan 29 16:24:16.322511 containerd[1498]: time="2025-01-29T16:24:16.322468148Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:16.322720 containerd[1498]: time="2025-01-29T16:24:16.322593352Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:16.322720 containerd[1498]: time="2025-01-29T16:24:16.322604832Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:16.324602 containerd[1498]: time="2025-01-29T16:24:16.324393564Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:16.324602 containerd[1498]: time="2025-01-29T16:24:16.324532848Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:16.324602 containerd[1498]: time="2025-01-29T16:24:16.324546329Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:16.326306 containerd[1498]: time="2025-01-29T16:24:16.325682441Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:16.327920 kubelet[3011]: I0129 16:24:16.327128 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea" Jan 29 16:24:16.329281 containerd[1498]: time="2025-01-29T16:24:16.328289717Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:16.329470 containerd[1498]: time="2025-01-29T16:24:16.329446750Z" level=info msg="Ensure that sandbox 057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea in task-service has been cleanup successfully" Jan 29 16:24:16.329574 containerd[1498]: time="2025-01-29T16:24:16.329257545Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:16.329685 containerd[1498]: time="2025-01-29T16:24:16.329666797Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:16.330787 containerd[1498]: time="2025-01-29T16:24:16.330710387Z" level=info msg="TearDown network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" successfully" Jan 29 16:24:16.330787 containerd[1498]: time="2025-01-29T16:24:16.330771388Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" returns successfully" Jan 29 16:24:16.333704 containerd[1498]: time="2025-01-29T16:24:16.331981983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:24:16.334230 systemd[1]: 
run-netns-cni\x2d9dc4c1fd\x2d545f\x2d31e3\x2db412\x2d59b50dc7aced.mount: Deactivated successfully. Jan 29 16:24:16.337155 containerd[1498]: time="2025-01-29T16:24:16.337110652Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:16.337530 containerd[1498]: time="2025-01-29T16:24:16.337392820Z" level=info msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:16.337530 containerd[1498]: time="2025-01-29T16:24:16.337408620Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:16.339478 containerd[1498]: time="2025-01-29T16:24:16.339419758Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:16.340023 containerd[1498]: time="2025-01-29T16:24:16.339750048Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:16.340023 containerd[1498]: time="2025-01-29T16:24:16.339768849Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:16.342161 containerd[1498]: time="2025-01-29T16:24:16.342110796Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:16.342288 containerd[1498]: time="2025-01-29T16:24:16.342263721Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:16.342288 containerd[1498]: time="2025-01-29T16:24:16.342275481Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:16.343592 containerd[1498]: time="2025-01-29T16:24:16.343549398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:4,}" Jan 29 16:24:16.347282 kubelet[3011]: I0129 16:24:16.347252 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128" Jan 29 16:24:16.348611 containerd[1498]: time="2025-01-29T16:24:16.348553942Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:16.350548 containerd[1498]: time="2025-01-29T16:24:16.350511439Z" level=info msg="Ensure that sandbox 3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128 in task-service has been cleanup successfully" Jan 29 16:24:16.355068 containerd[1498]: time="2025-01-29T16:24:16.353386642Z" level=info msg="TearDown network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" successfully" Jan 29 16:24:16.355068 containerd[1498]: time="2025-01-29T16:24:16.353432843Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" returns successfully" Jan 29 16:24:16.356349 systemd[1]: run-netns-cni\x2d9513e966\x2d670f\x2dbda7\x2dbbbe\x2dac27612b69f9.mount: Deactivated successfully. 
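The repeated run-netns-cni mount deactivations correspond to named network namespaces that were created under the conventional /run/netns directory for each sandbox before its network setup failed. A minimal sketch that lists whatever cni-prefixed namespaces are still present there; both the directory and the "cni-" prefix are assumptions inferred from the unit names in this log rather than something the log states directly:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Conventional location of named network namespaces; the run-netns-cni-*
// mount units in the log correspond to entries in this directory.
const netnsDir = "/run/netns"

func main() {
	entries, err := os.ReadDir(netnsDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", netnsDir, err)
		return
	}
	for _, e := range entries {
		if strings.HasPrefix(e.Name(), "cni-") {
			fmt.Println("leftover CNI network namespace:", e.Name())
		}
	}
}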
Jan 29 16:24:16.359615 containerd[1498]: time="2025-01-29T16:24:16.359463458Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:16.360911 kubelet[3011]: I0129 16:24:16.360803 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26" Jan 29 16:24:16.361245 containerd[1498]: time="2025-01-29T16:24:16.360458207Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:16.361245 containerd[1498]: time="2025-01-29T16:24:16.361171747Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:16.362467 containerd[1498]: time="2025-01-29T16:24:16.361936289Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:16.362467 containerd[1498]: time="2025-01-29T16:24:16.362126175Z" level=info msg="Ensure that sandbox 2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26 in task-service has been cleanup successfully" Jan 29 16:24:16.364898 containerd[1498]: time="2025-01-29T16:24:16.364358599Z" level=info msg="TearDown network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" successfully" Jan 29 16:24:16.364898 containerd[1498]: time="2025-01-29T16:24:16.364403721Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" returns successfully" Jan 29 16:24:16.365556 systemd[1]: run-netns-cni\x2d17c9fa6e\x2dc409\x2d9b2c\x2de6ff\x2d6914bce8b1cf.mount: Deactivated successfully. Jan 29 16:24:16.367702 containerd[1498]: time="2025-01-29T16:24:16.367557692Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:16.368050 containerd[1498]: time="2025-01-29T16:24:16.367737417Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:16.368050 containerd[1498]: time="2025-01-29T16:24:16.367750817Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369263661Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369399345Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369413065Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369575750Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369662433Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:16.371132 containerd[1498]: time="2025-01-29T16:24:16.369674433Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns 
successfully" Jan 29 16:24:16.374128 containerd[1498]: time="2025-01-29T16:24:16.373421901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:4,}" Jan 29 16:24:16.374128 containerd[1498]: time="2025-01-29T16:24:16.373744111Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:16.374128 containerd[1498]: time="2025-01-29T16:24:16.373882355Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:16.375345 containerd[1498]: time="2025-01-29T16:24:16.375276275Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:16.377397 containerd[1498]: time="2025-01-29T16:24:16.377321614Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:16.377842 containerd[1498]: time="2025-01-29T16:24:16.377428537Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:16.377842 containerd[1498]: time="2025-01-29T16:24:16.377438777Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:16.380470 containerd[1498]: time="2025-01-29T16:24:16.380362662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:4,}" Jan 29 16:24:16.386678 kubelet[3011]: I0129 16:24:16.386065 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3" Jan 29 16:24:16.388298 containerd[1498]: time="2025-01-29T16:24:16.388208889Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:16.388516 containerd[1498]: time="2025-01-29T16:24:16.388397174Z" level=info msg="Ensure that sandbox 37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3 in task-service has been cleanup successfully" Jan 29 16:24:16.388599 containerd[1498]: time="2025-01-29T16:24:16.388579659Z" level=info msg="TearDown network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" successfully" Jan 29 16:24:16.388745 containerd[1498]: time="2025-01-29T16:24:16.388597420Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" returns successfully" Jan 29 16:24:16.390338 containerd[1498]: time="2025-01-29T16:24:16.390169785Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:16.390338 containerd[1498]: time="2025-01-29T16:24:16.390322870Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:16.390338 containerd[1498]: time="2025-01-29T16:24:16.390333710Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:16.392745 containerd[1498]: time="2025-01-29T16:24:16.392222885Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:16.392745 
containerd[1498]: time="2025-01-29T16:24:16.392347248Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:16.392745 containerd[1498]: time="2025-01-29T16:24:16.392361969Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:16.395135 containerd[1498]: time="2025-01-29T16:24:16.394850121Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:16.396979 containerd[1498]: time="2025-01-29T16:24:16.396362764Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:16.396979 containerd[1498]: time="2025-01-29T16:24:16.396392445Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:16.399159 kubelet[3011]: I0129 16:24:16.399023 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d" Jan 29 16:24:16.402527 containerd[1498]: time="2025-01-29T16:24:16.402299296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:4,}" Jan 29 16:24:16.402527 containerd[1498]: time="2025-01-29T16:24:16.402467981Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:16.402734 containerd[1498]: time="2025-01-29T16:24:16.402677267Z" level=info msg="Ensure that sandbox 0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d in task-service has been cleanup successfully" Jan 29 16:24:16.405302 containerd[1498]: time="2025-01-29T16:24:16.404928252Z" level=info msg="TearDown network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" successfully" Jan 29 16:24:16.405302 containerd[1498]: time="2025-01-29T16:24:16.405010774Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" returns successfully" Jan 29 16:24:16.406716 containerd[1498]: time="2025-01-29T16:24:16.406561459Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:16.406899 containerd[1498]: time="2025-01-29T16:24:16.406727864Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:16.406899 containerd[1498]: time="2025-01-29T16:24:16.406743624Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:16.409096 containerd[1498]: time="2025-01-29T16:24:16.408481715Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:16.409096 containerd[1498]: time="2025-01-29T16:24:16.408618719Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:16.409096 containerd[1498]: time="2025-01-29T16:24:16.408690601Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:16.411481 containerd[1498]: time="2025-01-29T16:24:16.411288516Z" 
level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:16.413085 containerd[1498]: time="2025-01-29T16:24:16.412447669Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:16.414538 containerd[1498]: time="2025-01-29T16:24:16.413845310Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:16.415615 containerd[1498]: time="2025-01-29T16:24:16.415297392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:4,}" Jan 29 16:24:16.640092 containerd[1498]: time="2025-01-29T16:24:16.638925415Z" level=error msg="Failed to destroy network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.649641 containerd[1498]: time="2025-01-29T16:24:16.649147591Z" level=error msg="encountered an error cleaning up failed sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.651055 containerd[1498]: time="2025-01-29T16:24:16.650786078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.652617 kubelet[3011]: E0129 16:24:16.652567 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.654374 kubelet[3011]: E0129 16:24:16.653760 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:16.654374 kubelet[3011]: E0129 16:24:16.653842 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:16.654374 kubelet[3011]: E0129 16:24:16.653915 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:16.673353 containerd[1498]: time="2025-01-29T16:24:16.673121923Z" level=error msg="Failed to destroy network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.675041 containerd[1498]: time="2025-01-29T16:24:16.674753011Z" level=error msg="encountered an error cleaning up failed sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.675621 containerd[1498]: time="2025-01-29T16:24:16.675500872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.676605 kubelet[3011]: E0129 16:24:16.676047 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.676605 kubelet[3011]: E0129 16:24:16.676122 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:16.676605 kubelet[3011]: E0129 16:24:16.676176 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:16.677187 kubelet[3011]: E0129 16:24:16.677048 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:16.687375 containerd[1498]: time="2025-01-29T16:24:16.687149009Z" level=error msg="Failed to destroy network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.688731 containerd[1498]: time="2025-01-29T16:24:16.688565450Z" level=error msg="encountered an error cleaning up failed sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.688996 containerd[1498]: time="2025-01-29T16:24:16.688785296Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.691684 kubelet[3011]: E0129 16:24:16.691421 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.692526 kubelet[3011]: E0129 16:24:16.691611 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:16.692526 kubelet[3011]: E0129 16:24:16.691847 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:16.692526 kubelet[3011]: E0129 16:24:16.691922 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:16.727057 containerd[1498]: time="2025-01-29T16:24:16.727007241Z" level=error msg="Failed to destroy network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.728400 containerd[1498]: time="2025-01-29T16:24:16.728296438Z" level=error msg="encountered an error cleaning up failed sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.728979 containerd[1498]: time="2025-01-29T16:24:16.728749931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.729855 kubelet[3011]: E0129 16:24:16.729643 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.729855 kubelet[3011]: E0129 16:24:16.729713 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:16.729855 kubelet[3011]: E0129 16:24:16.729743 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:16.729978 kubelet[3011]: E0129 16:24:16.729784 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:16.777834 containerd[1498]: time="2025-01-29T16:24:16.777604063Z" level=error msg="Failed to destroy network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.779873 containerd[1498]: time="2025-01-29T16:24:16.779345754Z" level=error msg="encountered an error cleaning up failed sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.779873 containerd[1498]: time="2025-01-29T16:24:16.779440196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.780058 kubelet[3011]: E0129 16:24:16.779711 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.780058 kubelet[3011]: E0129 16:24:16.779779 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:16.780058 kubelet[3011]: E0129 
16:24:16.779801 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:16.781147 kubelet[3011]: E0129 16:24:16.780723 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:16.783869 containerd[1498]: time="2025-01-29T16:24:16.783406991Z" level=error msg="Failed to destroy network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.785094 containerd[1498]: time="2025-01-29T16:24:16.784730229Z" level=error msg="encountered an error cleaning up failed sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.785094 containerd[1498]: time="2025-01-29T16:24:16.784868273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.785839 kubelet[3011]: E0129 16:24:16.785760 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:16.786924 kubelet[3011]: E0129 16:24:16.786012 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:16.786924 kubelet[3011]: E0129 16:24:16.786037 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:16.787281 kubelet[3011]: E0129 16:24:16.786465 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:16.935938 systemd[1]: run-netns-cni\x2dee34bb9e\x2ded95\x2db3c5\x2da512\x2de705e092df0c.mount: Deactivated successfully. Jan 29 16:24:16.936065 systemd[1]: run-netns-cni\x2d948f0080\x2d505e\x2d777a\x2dd2c8\x2d1d1d172af562.mount: Deactivated successfully. Jan 29 16:24:17.408077 kubelet[3011]: I0129 16:24:17.408037 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea" Jan 29 16:24:17.412799 containerd[1498]: time="2025-01-29T16:24:17.411385314Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" Jan 29 16:24:17.412799 containerd[1498]: time="2025-01-29T16:24:17.411565319Z" level=info msg="Ensure that sandbox 0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea in task-service has been cleanup successfully" Jan 29 16:24:17.418091 containerd[1498]: time="2025-01-29T16:24:17.415464432Z" level=info msg="TearDown network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" successfully" Jan 29 16:24:17.418091 containerd[1498]: time="2025-01-29T16:24:17.415499794Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" returns successfully" Jan 29 16:24:17.417129 systemd[1]: run-netns-cni\x2dd91c6877\x2d2d31\x2d6f6f\x2d5e76\x2d617941d6c6a4.mount: Deactivated successfully. 
Jan 29 16:24:17.419592 containerd[1498]: time="2025-01-29T16:24:17.419544671Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:17.419722 containerd[1498]: time="2025-01-29T16:24:17.419687915Z" level=info msg="TearDown network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" successfully" Jan 29 16:24:17.419722 containerd[1498]: time="2025-01-29T16:24:17.419701556Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" returns successfully" Jan 29 16:24:17.420508 containerd[1498]: time="2025-01-29T16:24:17.420468018Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:17.420713 containerd[1498]: time="2025-01-29T16:24:17.420576621Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:17.420713 containerd[1498]: time="2025-01-29T16:24:17.420623783Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:17.421677 kubelet[3011]: I0129 16:24:17.421020 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775" Jan 29 16:24:17.423271 containerd[1498]: time="2025-01-29T16:24:17.422455116Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" Jan 29 16:24:17.423271 containerd[1498]: time="2025-01-29T16:24:17.422908969Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:17.423271 containerd[1498]: time="2025-01-29T16:24:17.423097015Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:17.423271 containerd[1498]: time="2025-01-29T16:24:17.423213498Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:17.423271 containerd[1498]: time="2025-01-29T16:24:17.423140776Z" level=info msg="Ensure that sandbox 8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775 in task-service has been cleanup successfully" Jan 29 16:24:17.424123 containerd[1498]: time="2025-01-29T16:24:17.424080203Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:17.424299 containerd[1498]: time="2025-01-29T16:24:17.424197407Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:17.424299 containerd[1498]: time="2025-01-29T16:24:17.424210807Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:17.428767 systemd[1]: run-netns-cni\x2d6b5bbb7f\x2d40df\x2d3f5d\x2d4d37\x2d31aac6b8549f.mount: Deactivated successfully. 
Jan 29 16:24:17.429713 containerd[1498]: time="2025-01-29T16:24:17.428917224Z" level=info msg="TearDown network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" successfully" Jan 29 16:24:17.429713 containerd[1498]: time="2025-01-29T16:24:17.429219913Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" returns successfully" Jan 29 16:24:17.430744 containerd[1498]: time="2025-01-29T16:24:17.430332666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:5,}" Jan 29 16:24:17.432862 containerd[1498]: time="2025-01-29T16:24:17.432454647Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:17.432862 containerd[1498]: time="2025-01-29T16:24:17.432571531Z" level=info msg="TearDown network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" successfully" Jan 29 16:24:17.432862 containerd[1498]: time="2025-01-29T16:24:17.432581811Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" returns successfully" Jan 29 16:24:17.433392 containerd[1498]: time="2025-01-29T16:24:17.433357354Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:17.433459 containerd[1498]: time="2025-01-29T16:24:17.433450276Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:17.433484 containerd[1498]: time="2025-01-29T16:24:17.433460077Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns successfully" Jan 29 16:24:17.434874 containerd[1498]: time="2025-01-29T16:24:17.434793195Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:17.434970 containerd[1498]: time="2025-01-29T16:24:17.434927999Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:17.434970 containerd[1498]: time="2025-01-29T16:24:17.434939720Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:17.435599 containerd[1498]: time="2025-01-29T16:24:17.435559418Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:17.436711 containerd[1498]: time="2025-01-29T16:24:17.436489165Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:17.436711 containerd[1498]: time="2025-01-29T16:24:17.436546287Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:17.437372 kubelet[3011]: I0129 16:24:17.437337 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6" Jan 29 16:24:17.439236 containerd[1498]: time="2025-01-29T16:24:17.438749791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:5,}" Jan 29 16:24:17.440569 containerd[1498]: 
time="2025-01-29T16:24:17.439219044Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" Jan 29 16:24:17.441350 containerd[1498]: time="2025-01-29T16:24:17.441284945Z" level=info msg="Ensure that sandbox 1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6 in task-service has been cleanup successfully" Jan 29 16:24:17.444293 systemd[1]: run-netns-cni\x2d4d923d84\x2d433d\x2dea10\x2d73d3\x2d6b3b16b6b4fc.mount: Deactivated successfully. Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.444973052Z" level=info msg="TearDown network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" successfully" Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.445008253Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" returns successfully" Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.445934960Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.446580059Z" level=info msg="TearDown network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" successfully" Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.446730263Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" returns successfully" Jan 29 16:24:17.448039 containerd[1498]: time="2025-01-29T16:24:17.447805214Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:17.448220 containerd[1498]: time="2025-01-29T16:24:17.448111943Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:17.448220 containerd[1498]: time="2025-01-29T16:24:17.448127624Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.448886366Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.449036770Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.449048451Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.449624947Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.449789912Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:17.450284 containerd[1498]: time="2025-01-29T16:24:17.449802073Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:17.452417 kubelet[3011]: I0129 16:24:17.452055 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae" Jan 29 16:24:17.452566 
containerd[1498]: time="2025-01-29T16:24:17.452122780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:24:17.455473 containerd[1498]: time="2025-01-29T16:24:17.453514661Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" Jan 29 16:24:17.455473 containerd[1498]: time="2025-01-29T16:24:17.453733147Z" level=info msg="Ensure that sandbox af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae in task-service has been cleanup successfully" Jan 29 16:24:17.455473 containerd[1498]: time="2025-01-29T16:24:17.453977074Z" level=info msg="TearDown network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" successfully" Jan 29 16:24:17.455473 containerd[1498]: time="2025-01-29T16:24:17.453994595Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" returns successfully" Jan 29 16:24:17.456610 systemd[1]: run-netns-cni\x2d2c344171\x2d48ee\x2d792b\x2da8b8\x2dc72830ea19fa.mount: Deactivated successfully. Jan 29 16:24:17.458422 containerd[1498]: time="2025-01-29T16:24:17.457570819Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:17.458422 containerd[1498]: time="2025-01-29T16:24:17.457861507Z" level=info msg="TearDown network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" successfully" Jan 29 16:24:17.458422 containerd[1498]: time="2025-01-29T16:24:17.457884788Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" returns successfully" Jan 29 16:24:17.460791 containerd[1498]: time="2025-01-29T16:24:17.460051451Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:17.460791 containerd[1498]: time="2025-01-29T16:24:17.460188695Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:17.460791 containerd[1498]: time="2025-01-29T16:24:17.460209616Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:17.461106 containerd[1498]: time="2025-01-29T16:24:17.461050760Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:17.461188 containerd[1498]: time="2025-01-29T16:24:17.461170364Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:17.461188 containerd[1498]: time="2025-01-29T16:24:17.461183604Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:17.461888 containerd[1498]: time="2025-01-29T16:24:17.461840423Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:17.462067 containerd[1498]: time="2025-01-29T16:24:17.462034589Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:17.462067 containerd[1498]: time="2025-01-29T16:24:17.462047629Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns 
successfully" Jan 29 16:24:17.463414 containerd[1498]: time="2025-01-29T16:24:17.463339107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:5,}" Jan 29 16:24:17.465989 kubelet[3011]: I0129 16:24:17.465713 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513" Jan 29 16:24:17.469129 containerd[1498]: time="2025-01-29T16:24:17.467982562Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" Jan 29 16:24:17.469129 containerd[1498]: time="2025-01-29T16:24:17.468177888Z" level=info msg="Ensure that sandbox 20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513 in task-service has been cleanup successfully" Jan 29 16:24:17.469129 containerd[1498]: time="2025-01-29T16:24:17.469405044Z" level=info msg="TearDown network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" successfully" Jan 29 16:24:17.469129 containerd[1498]: time="2025-01-29T16:24:17.469441405Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" returns successfully" Jan 29 16:24:17.472177 containerd[1498]: time="2025-01-29T16:24:17.471843515Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:17.472177 containerd[1498]: time="2025-01-29T16:24:17.472058361Z" level=info msg="TearDown network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" successfully" Jan 29 16:24:17.472177 containerd[1498]: time="2025-01-29T16:24:17.472070361Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" returns successfully" Jan 29 16:24:17.474252 containerd[1498]: time="2025-01-29T16:24:17.473563485Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:17.474252 containerd[1498]: time="2025-01-29T16:24:17.473867054Z" level=info msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:17.474252 containerd[1498]: time="2025-01-29T16:24:17.473885534Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:17.475856 containerd[1498]: time="2025-01-29T16:24:17.475389858Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:17.475856 containerd[1498]: time="2025-01-29T16:24:17.475506181Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:17.475856 containerd[1498]: time="2025-01-29T16:24:17.475516942Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:17.476261 containerd[1498]: time="2025-01-29T16:24:17.476226682Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:17.476348 containerd[1498]: time="2025-01-29T16:24:17.476328285Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:17.476411 containerd[1498]: time="2025-01-29T16:24:17.476343326Z" 
level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:17.479715 kubelet[3011]: I0129 16:24:17.479426 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0" Jan 29 16:24:17.481076 containerd[1498]: time="2025-01-29T16:24:17.481016022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:5,}" Jan 29 16:24:17.484346 containerd[1498]: time="2025-01-29T16:24:17.484136553Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" Jan 29 16:24:17.484854 containerd[1498]: time="2025-01-29T16:24:17.484520244Z" level=info msg="Ensure that sandbox c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0 in task-service has been cleanup successfully" Jan 29 16:24:17.484854 containerd[1498]: time="2025-01-29T16:24:17.484759291Z" level=info msg="TearDown network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" successfully" Jan 29 16:24:17.484854 containerd[1498]: time="2025-01-29T16:24:17.484775331Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" returns successfully" Jan 29 16:24:17.485541 containerd[1498]: time="2025-01-29T16:24:17.485470512Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:17.486039 containerd[1498]: time="2025-01-29T16:24:17.486002447Z" level=info msg="TearDown network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" successfully" Jan 29 16:24:17.486390 containerd[1498]: time="2025-01-29T16:24:17.486027288Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" returns successfully" Jan 29 16:24:17.490306 containerd[1498]: time="2025-01-29T16:24:17.490071326Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:17.490306 containerd[1498]: time="2025-01-29T16:24:17.490192009Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:17.490306 containerd[1498]: time="2025-01-29T16:24:17.490202889Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:17.492066 containerd[1498]: time="2025-01-29T16:24:17.491766135Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:17.494312 containerd[1498]: time="2025-01-29T16:24:17.494183125Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:17.494312 containerd[1498]: time="2025-01-29T16:24:17.494239847Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:17.497850 containerd[1498]: time="2025-01-29T16:24:17.497237294Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:17.499083 containerd[1498]: time="2025-01-29T16:24:17.497449020Z" level=info msg="TearDown network for sandbox 
\"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:17.499083 containerd[1498]: time="2025-01-29T16:24:17.498889862Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:17.500408 containerd[1498]: time="2025-01-29T16:24:17.500203781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:5,}" Jan 29 16:24:17.783051 containerd[1498]: time="2025-01-29T16:24:17.782984617Z" level=error msg="Failed to destroy network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.786301 containerd[1498]: time="2025-01-29T16:24:17.786042466Z" level=error msg="encountered an error cleaning up failed sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.786301 containerd[1498]: time="2025-01-29T16:24:17.786150030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.788848 kubelet[3011]: E0129 16:24:17.787529 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.788848 kubelet[3011]: E0129 16:24:17.787592 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:17.788848 kubelet[3011]: E0129 16:24:17.787613 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" Jan 29 16:24:17.789287 kubelet[3011]: E0129 16:24:17.787725 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589789ffcb-dfxmb_calico-system(7c160f15-f2fa-4241-9a8e-11ff1238cb71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podUID="7c160f15-f2fa-4241-9a8e-11ff1238cb71" Jan 29 16:24:17.796808 containerd[1498]: time="2025-01-29T16:24:17.796277805Z" level=error msg="Failed to destroy network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.798864 containerd[1498]: time="2025-01-29T16:24:17.797292434Z" level=error msg="encountered an error cleaning up failed sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.798864 containerd[1498]: time="2025-01-29T16:24:17.797374356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.798986 kubelet[3011]: E0129 16:24:17.798117 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.798986 kubelet[3011]: E0129 16:24:17.798177 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:17.798986 kubelet[3011]: E0129 16:24:17.798199 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z22bk" Jan 29 16:24:17.799088 
kubelet[3011]: E0129 16:24:17.798246 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z22bk_calico-system(bec95826-331e-47ab-a0cc-d4c3b56446fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z22bk" podUID="bec95826-331e-47ab-a0cc-d4c3b56446fd" Jan 29 16:24:17.835617 containerd[1498]: time="2025-01-29T16:24:17.835304701Z" level=error msg="Failed to destroy network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.837453 containerd[1498]: time="2025-01-29T16:24:17.837335880Z" level=error msg="encountered an error cleaning up failed sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.840096 containerd[1498]: time="2025-01-29T16:24:17.839986238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.840412 kubelet[3011]: E0129 16:24:17.840314 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.840412 kubelet[3011]: E0129 16:24:17.840372 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:17.840412 kubelet[3011]: E0129 16:24:17.840393 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" Jan 29 16:24:17.841477 kubelet[3011]: E0129 16:24:17.840436 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-94j9d_calico-apiserver(c7972d1b-4410-4bcb-97d1-bf72f6d2582a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podUID="c7972d1b-4410-4bcb-97d1-bf72f6d2582a" Jan 29 16:24:17.855461 containerd[1498]: time="2025-01-29T16:24:17.855403007Z" level=error msg="Failed to destroy network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.857186 containerd[1498]: time="2025-01-29T16:24:17.857046255Z" level=error msg="encountered an error cleaning up failed sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.857330 containerd[1498]: time="2025-01-29T16:24:17.857225460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.857546 kubelet[3011]: E0129 16:24:17.857457 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.857546 kubelet[3011]: E0129 16:24:17.857516 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:17.857546 kubelet[3011]: E0129 16:24:17.857536 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" Jan 29 16:24:17.857804 kubelet[3011]: E0129 16:24:17.857585 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d75c7df-kdpvr_calico-apiserver(f934c577-e456-48e1-b8cb-5d9754694bac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podUID="f934c577-e456-48e1-b8cb-5d9754694bac" Jan 29 16:24:17.881127 containerd[1498]: time="2025-01-29T16:24:17.881078035Z" level=error msg="Failed to destroy network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.881702 containerd[1498]: time="2025-01-29T16:24:17.881550008Z" level=error msg="encountered an error cleaning up failed sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.881702 containerd[1498]: time="2025-01-29T16:24:17.881616130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.882937 kubelet[3011]: E0129 16:24:17.882879 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.883066 kubelet[3011]: E0129 16:24:17.882952 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:17.883066 kubelet[3011]: E0129 16:24:17.882973 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x54mr" Jan 29 16:24:17.883066 kubelet[3011]: E0129 16:24:17.883017 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x54mr_kube-system(62f55e30-6d37-4cf6-83d1-f987ecffdd26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x54mr" podUID="62f55e30-6d37-4cf6-83d1-f987ecffdd26" Jan 29 16:24:17.895560 containerd[1498]: time="2025-01-29T16:24:17.887152251Z" level=error msg="Failed to destroy network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.896159 containerd[1498]: time="2025-01-29T16:24:17.896014350Z" level=error msg="encountered an error cleaning up failed sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.896492 containerd[1498]: time="2025-01-29T16:24:17.896339319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.897036 kubelet[3011]: E0129 16:24:17.896988 3011 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 16:24:17.897229 kubelet[3011]: E0129 16:24:17.897063 3011 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:17.897229 kubelet[3011]: E0129 16:24:17.897084 3011 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4b7lh" Jan 29 16:24:17.897229 kubelet[3011]: E0129 16:24:17.897121 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4b7lh_kube-system(bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podUID="bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10" Jan 29 16:24:17.934918 systemd[1]: run-netns-cni\x2da0afc9b5\x2ddc7c\x2d52d8\x2d394f\x2d3329134b1a55.mount: Deactivated successfully. Jan 29 16:24:17.935020 systemd[1]: run-netns-cni\x2d67a095fe\x2d1d4c\x2d2379\x2d07c5\x2dc2f80a20f652.mount: Deactivated successfully. Jan 29 16:24:17.938010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4267029006.mount: Deactivated successfully. Jan 29 16:24:17.975354 containerd[1498]: time="2025-01-29T16:24:17.975301699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:17.976692 containerd[1498]: time="2025-01-29T16:24:17.976502974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 16:24:17.978139 containerd[1498]: time="2025-01-29T16:24:17.978089420Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:17.984390 containerd[1498]: time="2025-01-29T16:24:17.984323522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:17.985939 containerd[1498]: time="2025-01-29T16:24:17.985603119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.876862076s" Jan 29 16:24:17.985939 containerd[1498]: time="2025-01-29T16:24:17.985687282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 16:24:18.012598 containerd[1498]: time="2025-01-29T16:24:18.012257338Z" level=info msg="CreateContainer within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 16:24:18.040686 containerd[1498]: time="2025-01-29T16:24:18.040499887Z" level=info msg="CreateContainer 
within sandbox \"11da643b767ec93d61201551ba1c4921476802e387af4dc86954b04124877acd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c\"" Jan 29 16:24:18.042943 containerd[1498]: time="2025-01-29T16:24:18.042556587Z" level=info msg="StartContainer for \"a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c\"" Jan 29 16:24:18.081066 systemd[1]: Started cri-containerd-a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c.scope - libcontainer container a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c. Jan 29 16:24:18.123908 containerd[1498]: time="2025-01-29T16:24:18.122574855Z" level=info msg="StartContainer for \"a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c\" returns successfully" Jan 29 16:24:18.241173 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 16:24:18.241349 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 16:24:18.500485 kubelet[3011]: I0129 16:24:18.500437 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de" Jan 29 16:24:18.504014 containerd[1498]: time="2025-01-29T16:24:18.503899165Z" level=info msg="StopPodSandbox for \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\"" Jan 29 16:24:18.504494 containerd[1498]: time="2025-01-29T16:24:18.504108051Z" level=info msg="Ensure that sandbox 3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de in task-service has been cleanup successfully" Jan 29 16:24:18.507251 containerd[1498]: time="2025-01-29T16:24:18.507045577Z" level=info msg="TearDown network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" successfully" Jan 29 16:24:18.507251 containerd[1498]: time="2025-01-29T16:24:18.507080338Z" level=info msg="StopPodSandbox for \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" returns successfully" Jan 29 16:24:18.507897 containerd[1498]: time="2025-01-29T16:24:18.507864601Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" Jan 29 16:24:18.507990 containerd[1498]: time="2025-01-29T16:24:18.507976525Z" level=info msg="TearDown network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" successfully" Jan 29 16:24:18.508042 containerd[1498]: time="2025-01-29T16:24:18.507989685Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" returns successfully" Jan 29 16:24:18.508625 containerd[1498]: time="2025-01-29T16:24:18.508601183Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:18.509238 containerd[1498]: time="2025-01-29T16:24:18.509214641Z" level=info msg="TearDown network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" successfully" Jan 29 16:24:18.509319 containerd[1498]: time="2025-01-29T16:24:18.509238802Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" returns successfully" Jan 29 16:24:18.511763 containerd[1498]: time="2025-01-29T16:24:18.511408265Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:18.511763 containerd[1498]: time="2025-01-29T16:24:18.511692234Z" level=info 
msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:18.512127 containerd[1498]: time="2025-01-29T16:24:18.511709074Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:18.512882 kubelet[3011]: I0129 16:24:18.512761 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691" Jan 29 16:24:18.514341 containerd[1498]: time="2025-01-29T16:24:18.514240188Z" level=info msg="StopPodSandbox for \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\"" Jan 29 16:24:18.515024 containerd[1498]: time="2025-01-29T16:24:18.514432634Z" level=info msg="Ensure that sandbox 3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691 in task-service has been cleanup successfully" Jan 29 16:24:18.515024 containerd[1498]: time="2025-01-29T16:24:18.514985610Z" level=info msg="TearDown network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" successfully" Jan 29 16:24:18.515024 containerd[1498]: time="2025-01-29T16:24:18.515011571Z" level=info msg="StopPodSandbox for \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" returns successfully" Jan 29 16:24:18.515119 containerd[1498]: time="2025-01-29T16:24:18.514246509Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:18.515945 containerd[1498]: time="2025-01-29T16:24:18.515162455Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:18.515945 containerd[1498]: time="2025-01-29T16:24:18.515178456Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:18.516285 containerd[1498]: time="2025-01-29T16:24:18.516246967Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:18.516649 containerd[1498]: time="2025-01-29T16:24:18.516536176Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" Jan 29 16:24:18.517174 containerd[1498]: time="2025-01-29T16:24:18.516978949Z" level=info msg="TearDown network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" successfully" Jan 29 16:24:18.517174 containerd[1498]: time="2025-01-29T16:24:18.516998789Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" returns successfully" Jan 29 16:24:18.518860 containerd[1498]: time="2025-01-29T16:24:18.517094432Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:18.518860 containerd[1498]: time="2025-01-29T16:24:18.517464163Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:18.521570 containerd[1498]: time="2025-01-29T16:24:18.521306156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:6,}" Jan 29 16:24:18.521570 containerd[1498]: time="2025-01-29T16:24:18.521417039Z" level=info msg="StopPodSandbox for 
\"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:18.521570 containerd[1498]: time="2025-01-29T16:24:18.521499201Z" level=info msg="TearDown network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" successfully" Jan 29 16:24:18.521570 containerd[1498]: time="2025-01-29T16:24:18.521509242Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" returns successfully" Jan 29 16:24:18.523009 containerd[1498]: time="2025-01-29T16:24:18.522470310Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:18.523009 containerd[1498]: time="2025-01-29T16:24:18.522564993Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:18.523009 containerd[1498]: time="2025-01-29T16:24:18.522574753Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:18.523307 containerd[1498]: time="2025-01-29T16:24:18.523210292Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:18.523307 containerd[1498]: time="2025-01-29T16:24:18.523287934Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:18.523307 containerd[1498]: time="2025-01-29T16:24:18.523297494Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:18.527806 kubelet[3011]: I0129 16:24:18.525765 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.526961522Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527268931Z" level=info msg="StopPodSandbox for \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\"" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527293891Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527307172Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527439256Z" level=info msg="Ensure that sandbox 2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756 in task-service has been cleanup successfully" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527642902Z" level=info msg="TearDown network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" successfully" Jan 29 16:24:18.527957 containerd[1498]: time="2025-01-29T16:24:18.527720104Z" level=info msg="StopPodSandbox for \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" returns successfully" Jan 29 16:24:18.533567 containerd[1498]: time="2025-01-29T16:24:18.533518554Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" Jan 29 16:24:18.533855 containerd[1498]: 
time="2025-01-29T16:24:18.533625557Z" level=info msg="TearDown network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" successfully" Jan 29 16:24:18.533855 containerd[1498]: time="2025-01-29T16:24:18.533636238Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" returns successfully" Jan 29 16:24:18.533855 containerd[1498]: time="2025-01-29T16:24:18.533751601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:6,}" Jan 29 16:24:18.535868 containerd[1498]: time="2025-01-29T16:24:18.535373569Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:18.535868 containerd[1498]: time="2025-01-29T16:24:18.535469851Z" level=info msg="TearDown network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" successfully" Jan 29 16:24:18.535868 containerd[1498]: time="2025-01-29T16:24:18.535479412Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" returns successfully" Jan 29 16:24:18.538281 containerd[1498]: time="2025-01-29T16:24:18.537765599Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:18.538281 containerd[1498]: time="2025-01-29T16:24:18.537910923Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:18.538281 containerd[1498]: time="2025-01-29T16:24:18.537922203Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:18.539790 containerd[1498]: time="2025-01-29T16:24:18.539648894Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:18.539899 containerd[1498]: time="2025-01-29T16:24:18.539836900Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:18.539899 containerd[1498]: time="2025-01-29T16:24:18.539853620Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:18.542382 containerd[1498]: time="2025-01-29T16:24:18.540792048Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:18.542382 containerd[1498]: time="2025-01-29T16:24:18.540934252Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:18.542382 containerd[1498]: time="2025-01-29T16:24:18.540944972Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:18.543210 containerd[1498]: time="2025-01-29T16:24:18.543165797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:6,}" Jan 29 16:24:18.551195 kubelet[3011]: I0129 16:24:18.551141 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7" Jan 29 16:24:18.552948 containerd[1498]: time="2025-01-29T16:24:18.552901803Z" level=info 
msg="StopPodSandbox for \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\"" Jan 29 16:24:18.553578 containerd[1498]: time="2025-01-29T16:24:18.553125649Z" level=info msg="Ensure that sandbox 3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7 in task-service has been cleanup successfully" Jan 29 16:24:18.553578 containerd[1498]: time="2025-01-29T16:24:18.553384697Z" level=info msg="TearDown network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" successfully" Jan 29 16:24:18.553578 containerd[1498]: time="2025-01-29T16:24:18.553404218Z" level=info msg="StopPodSandbox for \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" returns successfully" Jan 29 16:24:18.556140 containerd[1498]: time="2025-01-29T16:24:18.556094337Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" Jan 29 16:24:18.559730 containerd[1498]: time="2025-01-29T16:24:18.559624200Z" level=info msg="TearDown network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" successfully" Jan 29 16:24:18.561382 containerd[1498]: time="2025-01-29T16:24:18.560836916Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" returns successfully" Jan 29 16:24:18.562503 containerd[1498]: time="2025-01-29T16:24:18.562220036Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:18.564595 containerd[1498]: time="2025-01-29T16:24:18.564346219Z" level=info msg="TearDown network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" successfully" Jan 29 16:24:18.565169 containerd[1498]: time="2025-01-29T16:24:18.564501583Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" returns successfully" Jan 29 16:24:18.566833 kubelet[3011]: I0129 16:24:18.566184 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05" Jan 29 16:24:18.574552 containerd[1498]: time="2025-01-29T16:24:18.571802918Z" level=info msg="StopPodSandbox for \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\"" Jan 29 16:24:18.575284 containerd[1498]: time="2025-01-29T16:24:18.575242578Z" level=info msg="Ensure that sandbox 3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05 in task-service has been cleanup successfully" Jan 29 16:24:18.575748 containerd[1498]: time="2025-01-29T16:24:18.575667311Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:18.575803 containerd[1498]: time="2025-01-29T16:24:18.575774274Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:18.575803 containerd[1498]: time="2025-01-29T16:24:18.575786154Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns successfully" Jan 29 16:24:18.582879 containerd[1498]: time="2025-01-29T16:24:18.582519712Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:18.582879 containerd[1498]: time="2025-01-29T16:24:18.582645076Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 
16:24:18.582879 containerd[1498]: time="2025-01-29T16:24:18.582714678Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:18.587441 containerd[1498]: time="2025-01-29T16:24:18.584798019Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:18.589154 containerd[1498]: time="2025-01-29T16:24:18.589096705Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:18.589531 containerd[1498]: time="2025-01-29T16:24:18.589508437Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:18.589875 containerd[1498]: time="2025-01-29T16:24:18.586543910Z" level=info msg="TearDown network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" successfully" Jan 29 16:24:18.589962 containerd[1498]: time="2025-01-29T16:24:18.589947930Z" level=info msg="StopPodSandbox for \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" returns successfully" Jan 29 16:24:18.593455 containerd[1498]: time="2025-01-29T16:24:18.593244867Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" Jan 29 16:24:18.593455 containerd[1498]: time="2025-01-29T16:24:18.593350190Z" level=info msg="TearDown network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" successfully" Jan 29 16:24:18.593455 containerd[1498]: time="2025-01-29T16:24:18.593362510Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" returns successfully" Jan 29 16:24:18.593455 containerd[1498]: time="2025-01-29T16:24:18.593451593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:6,}" Jan 29 16:24:18.595716 containerd[1498]: time="2025-01-29T16:24:18.595580855Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:18.598583 containerd[1498]: time="2025-01-29T16:24:18.598502181Z" level=info msg="TearDown network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" successfully" Jan 29 16:24:18.598583 containerd[1498]: time="2025-01-29T16:24:18.598552262Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" returns successfully" Jan 29 16:24:18.600220 kubelet[3011]: I0129 16:24:18.599282 3011 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043" Jan 29 16:24:18.604599 containerd[1498]: time="2025-01-29T16:24:18.604546078Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:18.604837 containerd[1498]: time="2025-01-29T16:24:18.604751924Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:18.604837 containerd[1498]: time="2025-01-29T16:24:18.604772885Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:18.604904 containerd[1498]: time="2025-01-29T16:24:18.604876168Z" 
level=info msg="StopPodSandbox for \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\"" Jan 29 16:24:18.605101 containerd[1498]: time="2025-01-29T16:24:18.605031293Z" level=info msg="Ensure that sandbox 6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043 in task-service has been cleanup successfully" Jan 29 16:24:18.608063 containerd[1498]: time="2025-01-29T16:24:18.607937258Z" level=info msg="TearDown network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" successfully" Jan 29 16:24:18.608063 containerd[1498]: time="2025-01-29T16:24:18.608034341Z" level=info msg="StopPodSandbox for \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" returns successfully" Jan 29 16:24:18.608566 containerd[1498]: time="2025-01-29T16:24:18.608248867Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:18.614837 containerd[1498]: time="2025-01-29T16:24:18.612021898Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:18.614837 containerd[1498]: time="2025-01-29T16:24:18.612088860Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:18.615716 containerd[1498]: time="2025-01-29T16:24:18.615511760Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" Jan 29 16:24:18.616155 containerd[1498]: time="2025-01-29T16:24:18.616132258Z" level=info msg="TearDown network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" successfully" Jan 29 16:24:18.616780 containerd[1498]: time="2025-01-29T16:24:18.616706075Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" returns successfully" Jan 29 16:24:18.617948 containerd[1498]: time="2025-01-29T16:24:18.617141008Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:18.617948 containerd[1498]: time="2025-01-29T16:24:18.617264052Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:18.617948 containerd[1498]: time="2025-01-29T16:24:18.617275452Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:18.618578 containerd[1498]: time="2025-01-29T16:24:18.618543209Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:18.619169 containerd[1498]: time="2025-01-29T16:24:18.619123146Z" level=info msg="TearDown network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" successfully" Jan 29 16:24:18.619456 containerd[1498]: time="2025-01-29T16:24:18.619429395Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" returns successfully" Jan 29 16:24:18.620636 containerd[1498]: time="2025-01-29T16:24:18.619347673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:6,}" Jan 29 16:24:18.622327 containerd[1498]: time="2025-01-29T16:24:18.621176966Z" level=info msg="StopPodSandbox for 
\"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:18.623762 containerd[1498]: time="2025-01-29T16:24:18.623728641Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:18.624415 containerd[1498]: time="2025-01-29T16:24:18.623899446Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:18.625522 containerd[1498]: time="2025-01-29T16:24:18.624779072Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:18.625522 containerd[1498]: time="2025-01-29T16:24:18.624903956Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:18.625522 containerd[1498]: time="2025-01-29T16:24:18.624917556Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:18.632989 containerd[1498]: time="2025-01-29T16:24:18.632325774Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:18.633746 containerd[1498]: time="2025-01-29T16:24:18.633620572Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:18.633746 containerd[1498]: time="2025-01-29T16:24:18.633647892Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:18.637081 containerd[1498]: time="2025-01-29T16:24:18.637000271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:6,}" Jan 29 16:24:18.957016 systemd[1]: run-netns-cni\x2d3dcae7a1\x2d7a8c\x2d38cc\x2dd3cd\x2d90ac4211da2a.mount: Deactivated successfully. Jan 29 16:24:18.957624 systemd[1]: run-netns-cni\x2d904fdd6b\x2df92c\x2dc089\x2d40e6\x2d67192955848c.mount: Deactivated successfully. Jan 29 16:24:18.957750 systemd[1]: run-netns-cni\x2d65b82427\x2d40f4\x2dc9e8\x2d7563\x2d5cd8b59b6d4a.mount: Deactivated successfully. Jan 29 16:24:18.957805 systemd[1]: run-netns-cni\x2d81b19906\x2d6b0d\x2dcd50\x2d4ee1\x2dae60a66aa23d.mount: Deactivated successfully. Jan 29 16:24:18.957906 systemd[1]: run-netns-cni\x2d6261226f\x2d70fd\x2d6f28\x2d332b\x2dfbc29eafb973.mount: Deactivated successfully. Jan 29 16:24:18.957955 systemd[1]: run-netns-cni\x2de203f47d\x2dfb27\x2de10c\x2d8165\x2d208cbfc79362.mount: Deactivated successfully. 
Jan 29 16:24:19.295966 systemd-networkd[1393]: cali486bb08f81f: Link UP Jan 29 16:24:19.297330 systemd-networkd[1393]: cali486bb08f81f: Gained carrier Jan 29 16:24:19.325659 kubelet[3011]: I0129 16:24:19.325329 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6mswg" podStartSLOduration=2.113431857 podStartE2EDuration="16.325304137s" podCreationTimestamp="2025-01-29 16:24:03 +0000 UTC" firstStartedPulling="2025-01-29 16:24:03.776962973 +0000 UTC m=+24.986388004" lastFinishedPulling="2025-01-29 16:24:17.988835213 +0000 UTC m=+39.198260284" observedRunningTime="2025-01-29 16:24:18.706581273 +0000 UTC m=+39.916006344" watchObservedRunningTime="2025-01-29 16:24:19.325304137 +0000 UTC m=+40.534729248" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:18.595 [INFO][4895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:18.731 [INFO][4895] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0 coredns-7db6d8ff4d- kube-system bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10 692 0 2025-01-29 16:23:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 coredns-7db6d8ff4d-4b7lh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali486bb08f81f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:18.732 [INFO][4895] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.111 [INFO][4942] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" HandleID="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.160 [INFO][4942] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" HandleID="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"coredns-7db6d8ff4d-4b7lh", "timestamp":"2025-01-29 16:24:19.111429616 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.161 [INFO][4942] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.163 [INFO][4942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.164 [INFO][4942] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.174 [INFO][4942] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.196 [INFO][4942] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.222 [INFO][4942] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.229 [INFO][4942] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.246 [INFO][4942] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.248 [INFO][4942] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.251 [INFO][4942] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99 Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.263 [INFO][4942] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.273 [INFO][4942] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.65/26] block=192.168.65.64/26 handle="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.273 [INFO][4942] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.65/26] handle="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.273 [INFO][4942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
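The [INFO][4942] ipam lines above show the shape of one Calico IPAM request for the coredns pod: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.65.64/26 block, claim one address, write the block back, and release the lock; the next entry reports the result (192.168.65.65/26). The allocator below is only a single-process toy sketch of that sequence — the handle names are shortened for readability, and real Calico IPAM works against a datastore with retries rather than an in-memory map:

package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy stand-in for an IPAM block such as 192.168.65.64/26.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // address -> handle ID
}

type allocator struct {
	mu    sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	block *block
}

// assign walks the block and claims the first free address for the handle,
// mirroring "Attempting to assign 1 addresses from block" / "Successfully claimed IPs".
func (a *allocator) assign(handle string) (netip.Addr, error) {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."

	for addr := a.block.cidr.Addr(); a.block.cidr.Contains(addr); addr = addr.Next() {
		if addr == a.block.cidr.Addr() {
			continue // skip the block's first address; assignments in the log start at .65
		}
		if _, taken := a.block.used[addr]; !taken {
			a.block.used[addr] = handle // "Writing block in order to claim IPs"
			return addr, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	cidr := netip.MustParsePrefix("192.168.65.64/26")
	a := &allocator{block: &block{cidr: cidr, used: map[netip.Addr]string{}}}
	for _, handle := range []string{"coredns-4b7lh", "apiserver-94j9d", "apiserver-kdpvr"} {
		ip, err := a.assign(handle)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s/26\n", handle, ip)
	}
}

Run as-is it prints .65, .66, and .67, matching the three assignments made from this block in the rest of the excerpt.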
Jan 29 16:24:19.328313 containerd[1498]: 2025-01-29 16:24:19.273 [INFO][4942] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.65/26] IPv6=[] ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" HandleID="k8s-pod-network.270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.279 [INFO][4895] cni-plugin/k8s.go 386: Populated endpoint ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"coredns-7db6d8ff4d-4b7lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali486bb08f81f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.279 [INFO][4895] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.65/32] ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.279 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali486bb08f81f ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.299 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" 
WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.300 [INFO][4895] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10", ResourceVersion:"692", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99", Pod:"coredns-7db6d8ff4d-4b7lh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali486bb08f81f", MAC:"5e:56:5d:2a:eb:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.329196 containerd[1498]: 2025-01-29 16:24:19.322 [INFO][4895] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4b7lh" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--4b7lh-eth0" Jan 29 16:24:19.357031 systemd-networkd[1393]: calid9c8ecb5452: Link UP Jan 29 16:24:19.360874 systemd-networkd[1393]: calid9c8ecb5452: Gained carrier Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:18.801 [INFO][4906] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:18.896 [INFO][4906] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0 calico-apiserver-5d75c7df- calico-apiserver c7972d1b-4410-4bcb-97d1-bf72f6d2582a 689 0 2025-01-29 16:24:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d75c7df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 calico-apiserver-5d75c7df-94j9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid9c8ecb5452 [] []}} ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:18.898 [INFO][4906] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.119 [INFO][4970] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" HandleID="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.175 [INFO][4970] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" HandleID="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"calico-apiserver-5d75c7df-94j9d", "timestamp":"2025-01-29 16:24:19.119559496 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.175 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.276 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.276 [INFO][4970] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.279 [INFO][4970] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.293 [INFO][4970] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.306 [INFO][4970] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.315 [INFO][4970] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.325 [INFO][4970] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.325 [INFO][4970] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.330 [INFO][4970] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695 Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.341 [INFO][4970] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.351 [INFO][4970] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.66/26] block=192.168.65.64/26 handle="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.351 [INFO][4970] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.66/26] handle="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.351 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
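The same lock/affinity/claim sequence repeats above for calico-apiserver-5d75c7df-94j9d (handle k8s-pod-network.1de53082...), with the claimed address reported in the next entry, and a third run for the -kdpvr pod follows further down. If you need to pull those results out of a capture like this one, a small filter keyed to the exact phrasing logged here is enough; the regex below assumes the "Successfully claimed IPs: [...]" wording and the handle="..." attribute appear exactly as above, and the file names in the usage note are placeholders:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// claimRe matches the IPAM result lines in this journal, e.g.
//   ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.66/26] ... handle="k8s-pod-network...."
var claimRe = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\].*handle="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := claimRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("claimed %s for handle %s\n", m[1], m[2])
		}
	}
}

Feed the journal text on stdin, e.g. go run claims.go < excerpt.log.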
Jan 29 16:24:19.389153 containerd[1498]: 2025-01-29 16:24:19.351 [INFO][4970] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.66/26] IPv6=[] ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" HandleID="k8s-pod-network.1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.354 [INFO][4906] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0", GenerateName:"calico-apiserver-5d75c7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7972d1b-4410-4bcb-97d1-bf72f6d2582a", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d75c7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"calico-apiserver-5d75c7df-94j9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9c8ecb5452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.354 [INFO][4906] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.66/32] ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.354 [INFO][4906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9c8ecb5452 ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.356 [INFO][4906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.357 [INFO][4906] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0", GenerateName:"calico-apiserver-5d75c7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"c7972d1b-4410-4bcb-97d1-bf72f6d2582a", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d75c7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695", Pod:"calico-apiserver-5d75c7df-94j9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9c8ecb5452", MAC:"52:f3:b4:1d:e8:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.390534 containerd[1498]: 2025-01-29 16:24:19.382 [INFO][4906] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-94j9d" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--94j9d-eth0" Jan 29 16:24:19.412026 containerd[1498]: time="2025-01-29T16:24:19.411268478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.412026 containerd[1498]: time="2025-01-29T16:24:19.411386641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.412026 containerd[1498]: time="2025-01-29T16:24:19.411412842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.412026 containerd[1498]: time="2025-01-29T16:24:19.411627128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.450065 systemd[1]: Started cri-containerd-270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99.scope - libcontainer container 270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99. 
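The "Populated endpoint" and "Added Mac, interface name, and active container ID" entries above dump full projectcalico.org/v3 WorkloadEndpoint objects. The struct below mirrors only the fields visible in those dumps, with values copied from the coredns-7db6d8ff4d-4b7lh endpoint; note the ports are logged in hex (0x35 is 53, 0x23c1 is 9153). It is a reading aid for the dumps, not the real API type:

package main

import "fmt"

// endpointPort and workloadEndpoint are trimmed-down mirrors of the fields
// visible in the WorkloadEndpoint dumps above; the real v3 type has many more.
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

type workloadEndpoint struct {
	Node          string
	Pod           string
	InterfaceName string
	MAC           string
	IPNetworks    []string
	Profiles      []string
	Ports         []endpointPort
}

func main() {
	// Values copied from the coredns-7db6d8ff4d-4b7lh endpoint in the log.
	ep := workloadEndpoint{
		Node:          "ci-4230-0-0-e-139a7b6c18",
		Pod:           "coredns-7db6d8ff4d-4b7lh",
		InterfaceName: "cali486bb08f81f",
		MAC:           "5e:56:5d:2a:eb:1e",
		IPNetworks:    []string{"192.168.65.65/32"},
		Profiles:      []string{"kns.kube-system", "ksa.kube-system.coredns"},
		Ports: []endpointPort{
			{Name: "dns", Protocol: "UDP", Port: 0x35},
			{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},
			{Name: "metrics", Protocol: "TCP", Port: 0x23c1},
		},
	}
	fmt.Printf("%s on %s via %s (%s)\n", ep.Pod, ep.Node, ep.InterfaceName, ep.MAC)
	for _, p := range ep.Ports {
		fmt.Printf("  %s/%s -> %d\n", p.Name, p.Protocol, p.Port)
	}
}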
Jan 29 16:24:19.469891 systemd-networkd[1393]: cali2b3ad4a2a52: Link UP Jan 29 16:24:19.470378 systemd-networkd[1393]: cali2b3ad4a2a52: Gained carrier Jan 29 16:24:19.472793 containerd[1498]: time="2025-01-29T16:24:19.472472126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.472793 containerd[1498]: time="2025-01-29T16:24:19.472540128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.472793 containerd[1498]: time="2025-01-29T16:24:19.472554649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.472793 containerd[1498]: time="2025-01-29T16:24:19.472647172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:18.972 [INFO][4938] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.049 [INFO][4938] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0 calico-apiserver-5d75c7df- calico-apiserver f934c577-e456-48e1-b8cb-5d9754694bac 690 0 2025-01-29 16:24:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d75c7df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 calico-apiserver-5d75c7df-kdpvr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b3ad4a2a52 [] []}} ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.050 [INFO][4938] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.181 [INFO][4991] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" HandleID="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.221 [INFO][4991] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" HandleID="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400046a1b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"calico-apiserver-5d75c7df-kdpvr", 
"timestamp":"2025-01-29 16:24:19.181732694 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.221 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.352 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.353 [INFO][4991] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.362 [INFO][4991] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.385 [INFO][4991] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.399 [INFO][4991] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.403 [INFO][4991] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.407 [INFO][4991] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.408 [INFO][4991] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.411 [INFO][4991] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.426 [INFO][4991] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.445 [INFO][4991] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.67/26] block=192.168.65.64/26 handle="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.445 [INFO][4991] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.67/26] handle="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.445 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 16:24:19.512464 containerd[1498]: 2025-01-29 16:24:19.445 [INFO][4991] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.67/26] IPv6=[] ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" HandleID="k8s-pod-network.17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.452 [INFO][4938] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0", GenerateName:"calico-apiserver-5d75c7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f934c577-e456-48e1-b8cb-5d9754694bac", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d75c7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"calico-apiserver-5d75c7df-kdpvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b3ad4a2a52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.454 [INFO][4938] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.67/32] ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.457 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b3ad4a2a52 ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.468 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.469 [INFO][4938] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0", GenerateName:"calico-apiserver-5d75c7df-", Namespace:"calico-apiserver", SelfLink:"", UID:"f934c577-e456-48e1-b8cb-5d9754694bac", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d75c7df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce", Pod:"calico-apiserver-5d75c7df-kdpvr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b3ad4a2a52", MAC:"0a:44:17:15:4d:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.513726 containerd[1498]: 2025-01-29 16:24:19.506 [INFO][4938] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce" Namespace="calico-apiserver" Pod="calico-apiserver-5d75c7df-kdpvr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--apiserver--5d75c7df--kdpvr-eth0" Jan 29 16:24:19.536054 systemd[1]: Started cri-containerd-1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695.scope - libcontainer container 1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695. Jan 29 16:24:19.569050 containerd[1498]: time="2025-01-29T16:24:19.568432123Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.570988 containerd[1498]: time="2025-01-29T16:24:19.570543865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.576005 containerd[1498]: time="2025-01-29T16:24:19.574470901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.577470 containerd[1498]: time="2025-01-29T16:24:19.576354677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.581213 containerd[1498]: time="2025-01-29T16:24:19.581153539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4b7lh,Uid:bb0afd5a-9d39-4388-bdc4-9dfe26dc7d10,Namespace:kube-system,Attempt:6,} returns sandbox id \"270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99\"" Jan 29 16:24:19.596735 containerd[1498]: time="2025-01-29T16:24:19.596685198Z" level=info msg="CreateContainer within sandbox \"270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:24:19.619466 systemd-networkd[1393]: cali50cc6293cb8: Link UP Jan 29 16:24:19.624249 systemd-networkd[1393]: cali50cc6293cb8: Gained carrier Jan 29 16:24:19.643330 containerd[1498]: time="2025-01-29T16:24:19.642561273Z" level=info msg="CreateContainer within sandbox \"270249c89280cc67550d98cfef87aafd9955b3459d7d291907c78a72535bbb99\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6865ede4652881680dd68d73fc263139f24b3703fe0b022d2765fe2e04010d3\"" Jan 29 16:24:19.647191 containerd[1498]: time="2025-01-29T16:24:19.646855720Z" level=info msg="StartContainer for \"b6865ede4652881680dd68d73fc263139f24b3703fe0b022d2765fe2e04010d3\"" Jan 29 16:24:19.660016 kubelet[3011]: I0129 16:24:19.659942 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:19.664116 systemd[1]: Started cri-containerd-17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce.scope - libcontainer container 17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce. Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:18.760 [INFO][4908] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:18.913 [INFO][4908] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0 csi-node-driver- calico-system bec95826-331e-47ab-a0cc-d4c3b56446fd 610 0 2025-01-29 16:24:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 csi-node-driver-z22bk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali50cc6293cb8 [] []}} ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:18.913 [INFO][4908] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.223 [INFO][4975] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" HandleID="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" 
Workload="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.256 [INFO][4975] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" HandleID="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Workload="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"csi-node-driver-z22bk", "timestamp":"2025-01-29 16:24:19.223722575 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.257 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.445 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.446 [INFO][4975] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.453 [INFO][4975] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.479 [INFO][4975] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.525 [INFO][4975] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.533 [INFO][4975] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.541 [INFO][4975] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.541 [INFO][4975] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.547 [INFO][4975] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.562 [INFO][4975] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4975] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.68/26] block=192.168.65.64/26 handle="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4975] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.68/26] 
handle="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 16:24:19.673655 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4975] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.68/26] IPv6=[] ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" HandleID="k8s-pod-network.97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Workload="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.588 [INFO][4908] cni-plugin/k8s.go 386: Populated endpoint ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bec95826-331e-47ab-a0cc-d4c3b56446fd", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"csi-node-driver-z22bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cc6293cb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.589 [INFO][4908] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.68/32] ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.589 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50cc6293cb8 ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.632 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" 
WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.634 [INFO][4908] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bec95826-331e-47ab-a0cc-d4c3b56446fd", ResourceVersion:"610", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e", Pod:"csi-node-driver-z22bk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.65.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50cc6293cb8", MAC:"9e:78:98:69:5b:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.675180 containerd[1498]: 2025-01-29 16:24:19.663 [INFO][4908] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e" Namespace="calico-system" Pod="csi-node-driver-z22bk" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-csi--node--driver--z22bk-eth0" Jan 29 16:24:19.681588 containerd[1498]: time="2025-01-29T16:24:19.681540545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-94j9d,Uid:c7972d1b-4410-4bcb-97d1-bf72f6d2582a,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695\"" Jan 29 16:24:19.688712 containerd[1498]: time="2025-01-29T16:24:19.688403428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:24:19.730726 systemd-networkd[1393]: cali28b03e2f5c2: Link UP Jan 29 16:24:19.730916 systemd[1]: Started cri-containerd-b6865ede4652881680dd68d73fc263139f24b3703fe0b022d2765fe2e04010d3.scope - libcontainer container b6865ede4652881680dd68d73fc263139f24b3703fe0b022d2765fe2e04010d3. Jan 29 16:24:19.732927 systemd-networkd[1393]: cali28b03e2f5c2: Gained carrier Jan 29 16:24:19.763570 containerd[1498]: time="2025-01-29T16:24:19.763414525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.763570 containerd[1498]: time="2025-01-29T16:24:19.763499088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.763570 containerd[1498]: time="2025-01-29T16:24:19.763516648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.764169 containerd[1498]: time="2025-01-29T16:24:19.763636972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.013 [INFO][4945] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.077 [INFO][4945] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0 coredns-7db6d8ff4d- kube-system 62f55e30-6d37-4cf6-83d1-f987ecffdd26 684 0 2025-01-29 16:23:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 coredns-7db6d8ff4d-x54mr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28b03e2f5c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.077 [INFO][4945] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.242 [INFO][4983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" HandleID="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.270 [INFO][4983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" HandleID="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000350a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"coredns-7db6d8ff4d-x54mr", "timestamp":"2025-01-29 16:24:19.242301644 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.270 [INFO][4983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.578 [INFO][4983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.595 [INFO][4983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.619 [INFO][4983] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.654 [INFO][4983] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.667 [INFO][4983] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.676 [INFO][4983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.676 [INFO][4983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.683 [INFO][4983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8 Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.703 [INFO][4983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.721 [INFO][4983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.69/26] block=192.168.65.64/26 handle="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.722 [INFO][4983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.69/26] handle="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.722 [INFO][4983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
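
As in the earlier trace, every CNI invocation above logs "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock" around loading the 192.168.65.64/26 block, claiming an address, and writing the block back, so concurrent sandbox setups on this node take turns. A rough sketch of that acquire/claim/release pattern — invented types and names, not Calico's implementation:

// Illustrative sketch of the serialized claim cycle visible in the log.
package main

import (
	"fmt"
	"sync"
)

type block struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	cidr string
	next int // next free host offset within the /26
}

func (b *block) claim(handle string) string {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.65.%d/26", 64+b.next)
	b.next++
	fmt.Printf("claimed %s for %s from block %s\n", ip, handle, b.cidr)
	return ip
}

func main() {
	b := &block{cidr: "192.168.65.64/26", next: 3} // .67 was the next free address above
	var wg sync.WaitGroup
	for _, h := range []string{"k8s-pod-network.aaa", "k8s-pod-network.bbb"} {
		wg.Add(1)
		go func(h string) { defer wg.Done(); b.claim(h) }(h)
	}
	wg.Wait()
}
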
Jan 29 16:24:19.776818 containerd[1498]: 2025-01-29 16:24:19.722 [INFO][4983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.69/26] IPv6=[] ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" HandleID="k8s-pod-network.12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Workload="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.728 [INFO][4945] cni-plugin/k8s.go 386: Populated endpoint ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62f55e30-6d37-4cf6-83d1-f987ecffdd26", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"coredns-7db6d8ff4d-x54mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28b03e2f5c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.728 [INFO][4945] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.69/32] ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.728 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28b03e2f5c2 ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.734 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" 
WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.734 [INFO][4945] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"62f55e30-6d37-4cf6-83d1-f987ecffdd26", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 23, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8", Pod:"coredns-7db6d8ff4d-x54mr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28b03e2f5c2", MAC:"de:34:6d:6b:c7:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.777370 containerd[1498]: 2025-01-29 16:24:19.771 [INFO][4945] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x54mr" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-coredns--7db6d8ff4d--x54mr-eth0" Jan 29 16:24:19.801255 systemd[1]: Started cri-containerd-97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e.scope - libcontainer container 97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e. 
Jan 29 16:24:19.835895 containerd[1498]: time="2025-01-29T16:24:19.833320911Z" level=info msg="StartContainer for \"b6865ede4652881680dd68d73fc263139f24b3703fe0b022d2765fe2e04010d3\" returns successfully" Jan 29 16:24:19.864525 containerd[1498]: time="2025-01-29T16:24:19.864474512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d75c7df-kdpvr,Uid:f934c577-e456-48e1-b8cb-5d9754694bac,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce\"" Jan 29 16:24:19.879356 systemd-networkd[1393]: calib100194e62a: Link UP Jan 29 16:24:19.884455 systemd-networkd[1393]: calib100194e62a: Gained carrier Jan 29 16:24:19.903215 containerd[1498]: time="2025-01-29T16:24:19.903132975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z22bk,Uid:bec95826-331e-47ab-a0cc-d4c3b56446fd,Namespace:calico-system,Attempt:6,} returns sandbox id \"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e\"" Jan 29 16:24:19.905006 containerd[1498]: time="2025-01-29T16:24:19.904656500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.905006 containerd[1498]: time="2025-01-29T16:24:19.904794824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.905006 containerd[1498]: time="2025-01-29T16:24:19.904839665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.906698 containerd[1498]: time="2025-01-29T16:24:19.905800093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:18.919 [INFO][4925] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.063 [INFO][4925] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0 calico-kube-controllers-589789ffcb- calico-system 7c160f15-f2fa-4241-9a8e-11ff1238cb71 691 0 2025-01-29 16:24:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589789ffcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4230-0-0-e-139a7b6c18 calico-kube-controllers-589789ffcb-dfxmb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib100194e62a [] []}} ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.064 [INFO][4925] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.245 [INFO][4982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" HandleID="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.278 [INFO][4982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" HandleID="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000383910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4230-0-0-e-139a7b6c18", "pod":"calico-kube-controllers-589789ffcb-dfxmb", "timestamp":"2025-01-29 16:24:19.245512619 +0000 UTC"}, Hostname:"ci-4230-0-0-e-139a7b6c18", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.279 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.722 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.723 [INFO][4982] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4230-0-0-e-139a7b6c18' Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.728 [INFO][4982] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.756 [INFO][4982] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.782 [INFO][4982] ipam/ipam.go 489: Trying affinity for 192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.788 [INFO][4982] ipam/ipam.go 155: Attempting to load block cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.797 [INFO][4982] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.64/26 host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.798 [INFO][4982] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.64/26 handle="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.800 [INFO][4982] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87 Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.813 [INFO][4982] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.65.64/26 handle="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.850 [INFO][4982] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.65.70/26] block=192.168.65.64/26 handle="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.855 [INFO][4982] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.70/26] handle="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" host="ci-4230-0-0-e-139a7b6c18" Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.857 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
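
Every entry in this section carries the same journal-style prefix: a timestamp, the emitting unit, and its PID in brackets (containerd[1498], kubelet[3011], systemd[1], systemd-networkd[1393]). A small sketch for splitting that prefix off when post-processing a dump like this; the format assumption comes only from the lines shown here:

// Split the "Jan 29 16:24:19.933098 unit[pid]: message" prefix used above.
package main

import (
	"fmt"
	"regexp"
)

var prefix = regexp.MustCompile(`^(\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) (\S+)\[(\d+)\]: (.*)$`)

func main() {
	line := "Jan 29 16:24:19.933098 systemd[1]: Started cri-containerd-12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8.scope"
	if m := prefix.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%q unit=%q pid=%s msg=%q\n", m[1], m[2], m[3], m[4])
	}
}
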
Jan 29 16:24:19.917264 containerd[1498]: 2025-01-29 16:24:19.857 [INFO][4982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.65.70/26] IPv6=[] ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" HandleID="k8s-pod-network.8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Workload="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.871 [INFO][4925] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0", GenerateName:"calico-kube-controllers-589789ffcb-", Namespace:"calico-system", SelfLink:"", UID:"7c160f15-f2fa-4241-9a8e-11ff1238cb71", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589789ffcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"", Pod:"calico-kube-controllers-589789ffcb-dfxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib100194e62a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.872 [INFO][4925] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.65.70/32] ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.872 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib100194e62a ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.889 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 
16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.893 [INFO][4925] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0", GenerateName:"calico-kube-controllers-589789ffcb-", Namespace:"calico-system", SelfLink:"", UID:"7c160f15-f2fa-4241-9a8e-11ff1238cb71", ResourceVersion:"691", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 16, 24, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589789ffcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4230-0-0-e-139a7b6c18", ContainerID:"8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87", Pod:"calico-kube-controllers-589789ffcb-dfxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib100194e62a", MAC:"a6:1e:2a:17:d3:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 16:24:19.917933 containerd[1498]: 2025-01-29 16:24:19.908 [INFO][4925] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87" Namespace="calico-system" Pod="calico-kube-controllers-589789ffcb-dfxmb" WorkloadEndpoint="ci--4230--0--0--e--139a7b6c18-k8s-calico--kube--controllers--589789ffcb--dfxmb-eth0" Jan 29 16:24:19.933098 systemd[1]: Started cri-containerd-12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8.scope - libcontainer container 12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8. Jan 29 16:24:19.982980 containerd[1498]: time="2025-01-29T16:24:19.982845931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 16:24:19.982980 containerd[1498]: time="2025-01-29T16:24:19.982941533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 16:24:19.983517 containerd[1498]: time="2025-01-29T16:24:19.982958174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:19.983952 containerd[1498]: time="2025-01-29T16:24:19.983858800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 16:24:20.019658 systemd[1]: run-containerd-runc-k8s.io-8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87-runc.GFqy2K.mount: Deactivated successfully. Jan 29 16:24:20.030075 systemd[1]: Started cri-containerd-8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87.scope - libcontainer container 8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87. Jan 29 16:24:20.032609 containerd[1498]: time="2025-01-29T16:24:20.032476124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x54mr,Uid:62f55e30-6d37-4cf6-83d1-f987ecffdd26,Namespace:kube-system,Attempt:6,} returns sandbox id \"12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8\"" Jan 29 16:24:20.045850 containerd[1498]: time="2025-01-29T16:24:20.045096179Z" level=info msg="CreateContainer within sandbox \"12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 16:24:20.100657 containerd[1498]: time="2025-01-29T16:24:20.099959452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589789ffcb-dfxmb,Uid:7c160f15-f2fa-4241-9a8e-11ff1238cb71,Namespace:calico-system,Attempt:6,} returns sandbox id \"8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87\"" Jan 29 16:24:20.116770 containerd[1498]: time="2025-01-29T16:24:20.116660709Z" level=info msg="CreateContainer within sandbox \"12eb1f65341a3285b03053ac7da2cd694509ee3b847083d1c6f737364c8393a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6cec4a3097369a284a850949614371bff51c2483737a7c1077128845fb5e6b03\"" Jan 29 16:24:20.119901 containerd[1498]: time="2025-01-29T16:24:20.119443272Z" level=info msg="StartContainer for \"6cec4a3097369a284a850949614371bff51c2483737a7c1077128845fb5e6b03\"" Jan 29 16:24:20.150076 systemd[1]: Started cri-containerd-6cec4a3097369a284a850949614371bff51c2483737a7c1077128845fb5e6b03.scope - libcontainer container 6cec4a3097369a284a850949614371bff51c2483737a7c1077128845fb5e6b03. Jan 29 16:24:20.185475 containerd[1498]: time="2025-01-29T16:24:20.185416075Z" level=info msg="StartContainer for \"6cec4a3097369a284a850949614371bff51c2483737a7c1077128845fb5e6b03\" returns successfully" Jan 29 16:24:20.650794 kubelet[3011]: I0129 16:24:20.650303 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:20.714180 kubelet[3011]: I0129 16:24:20.712911 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4b7lh" podStartSLOduration=25.712888332 podStartE2EDuration="25.712888332s" podCreationTimestamp="2025-01-29 16:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:24:20.712206312 +0000 UTC m=+41.921631383" watchObservedRunningTime="2025-01-29 16:24:20.712888332 +0000 UTC m=+41.922313403" Jan 29 16:24:20.825263 systemd-networkd[1393]: cali2b3ad4a2a52: Gained IPv6LL Jan 29 16:24:20.892589 systemd-networkd[1393]: calid9c8ecb5452: Gained IPv6LL Jan 29 16:24:20.937738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753505738.mount: Deactivated successfully. 
Jan 29 16:24:21.081044 systemd-networkd[1393]: cali486bb08f81f: Gained IPv6LL Jan 29 16:24:21.273740 systemd-networkd[1393]: calib100194e62a: Gained IPv6LL Jan 29 16:24:21.338130 systemd-networkd[1393]: cali50cc6293cb8: Gained IPv6LL Jan 29 16:24:21.531132 systemd-networkd[1393]: cali28b03e2f5c2: Gained IPv6LL Jan 29 16:24:21.796930 kernel: bpftool[5533]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 16:24:22.084210 systemd-networkd[1393]: vxlan.calico: Link UP Jan 29 16:24:22.084219 systemd-networkd[1393]: vxlan.calico: Gained carrier Jan 29 16:24:23.513749 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Jan 29 16:24:23.846907 containerd[1498]: time="2025-01-29T16:24:23.846685741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:23.848873 containerd[1498]: time="2025-01-29T16:24:23.848772245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 29 16:24:23.850868 containerd[1498]: time="2025-01-29T16:24:23.850678622Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:23.854244 containerd[1498]: time="2025-01-29T16:24:23.853876999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:23.854913 containerd[1498]: time="2025-01-29T16:24:23.854776347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.165741579s" Jan 29 16:24:23.854913 containerd[1498]: time="2025-01-29T16:24:23.854837269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 16:24:23.857888 containerd[1498]: time="2025-01-29T16:24:23.857605353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 16:24:23.859932 containerd[1498]: time="2025-01-29T16:24:23.859884342Z" level=info msg="CreateContainer within sandbox \"1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:24:23.879543 containerd[1498]: time="2025-01-29T16:24:23.879469896Z" level=info msg="CreateContainer within sandbox \"1de53082ba52db63c0a1b821fd155223218bbce77342165c7b6b8525a2a08695\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a4f90b6b382b99030838bddb0e4c090adfb39bdbdb3fb0f534ea95d5cf78d04c\"" Jan 29 16:24:23.882043 containerd[1498]: time="2025-01-29T16:24:23.881064464Z" level=info msg="StartContainer for \"a4f90b6b382b99030838bddb0e4c090adfb39bdbdb3fb0f534ea95d5cf78d04c\"" Jan 29 16:24:23.923053 systemd[1]: Started cri-containerd-a4f90b6b382b99030838bddb0e4c090adfb39bdbdb3fb0f534ea95d5cf78d04c.scope - libcontainer container a4f90b6b382b99030838bddb0e4c090adfb39bdbdb3fb0f534ea95d5cf78d04c. 
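
The systemd-networkd entries above report each host-side Calico veth (cali2b3ad4a2a52, calid9c8ecb5452, cali486bb08f81f, calib100194e62a, cali50cc6293cb8, cali28b03e2f5c2) and the vxlan.calico device gaining an IPv6 link-local address. A hedged sketch, standard library only, that would list such interfaces and their fe80:: addresses when run on the node itself:

// List cali* interfaces and their IPv6 link-local addresses, roughly what the
// "Gained IPv6LL" messages above announce.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() == nil && ipnet.IP.IsLinkLocalUnicast() {
				fmt.Printf("%s has IPv6LL %s\n", ifc.Name, ipnet.IP)
			}
		}
	}
}
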
Jan 29 16:24:23.995096 containerd[1498]: time="2025-01-29T16:24:23.995026281Z" level=info msg="StartContainer for \"a4f90b6b382b99030838bddb0e4c090adfb39bdbdb3fb0f534ea95d5cf78d04c\" returns successfully" Jan 29 16:24:24.277361 containerd[1498]: time="2025-01-29T16:24:24.276307542Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:24.279659 containerd[1498]: time="2025-01-29T16:24:24.279592523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 16:24:24.282234 containerd[1498]: time="2025-01-29T16:24:24.282175361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 424.519807ms" Jan 29 16:24:24.282520 containerd[1498]: time="2025-01-29T16:24:24.282493931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 29 16:24:24.285844 containerd[1498]: time="2025-01-29T16:24:24.285722350Z" level=info msg="CreateContainer within sandbox \"17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 16:24:24.286696 containerd[1498]: time="2025-01-29T16:24:24.286663938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 16:24:24.331492 containerd[1498]: time="2025-01-29T16:24:24.331076294Z" level=info msg="CreateContainer within sandbox \"17c0a2ba05f066dae459bf83f8b53063d8423f0d9596b72050817d416e6c6fce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1bc802033982616a5e7b1a4729142a322dadd20389e2f8c4060fe2f2fba02072\"" Jan 29 16:24:24.333322 containerd[1498]: time="2025-01-29T16:24:24.332914070Z" level=info msg="StartContainer for \"1bc802033982616a5e7b1a4729142a322dadd20389e2f8c4060fe2f2fba02072\"" Jan 29 16:24:24.370703 systemd[1]: Started cri-containerd-1bc802033982616a5e7b1a4729142a322dadd20389e2f8c4060fe2f2fba02072.scope - libcontainer container 1bc802033982616a5e7b1a4729142a322dadd20389e2f8c4060fe2f2fba02072. 
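
A quick sanity check on the pull timing reported above: the PullImage request for ghcr.io/flatcar/calico/apiserver:v3.29.1 was logged at 16:24:19.688403428Z and the "Pulled image ... in 4.165741579s" message at 16:24:23.854776347Z. Subtracting the two timestamps (copied from the log) gives roughly the same figure; the sub-millisecond gap is just where containerd starts and stops its own timer:

// Difference between the PullImage request and the Pulled-image report above.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err1 := time.Parse(time.RFC3339Nano, "2025-01-29T16:24:19.688403428Z")
	end, err2 := time.Parse(time.RFC3339Nano, "2025-01-29T16:24:23.854776347Z")
	if err1 != nil || err2 != nil {
		panic("bad timestamp")
	}
	fmt.Println(end.Sub(start)) // 4.166372919s, close to the logged 4.165741579s
}
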
Jan 29 16:24:24.422778 containerd[1498]: time="2025-01-29T16:24:24.419807481Z" level=info msg="StartContainer for \"1bc802033982616a5e7b1a4729142a322dadd20389e2f8c4060fe2f2fba02072\" returns successfully" Jan 29 16:24:24.659075 kubelet[3011]: I0129 16:24:24.658911 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:24.763847 kubelet[3011]: I0129 16:24:24.761934 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x54mr" podStartSLOduration=29.761909759 podStartE2EDuration="29.761909759s" podCreationTimestamp="2025-01-29 16:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 16:24:20.741638708 +0000 UTC m=+41.951063779" watchObservedRunningTime="2025-01-29 16:24:24.761909759 +0000 UTC m=+45.971334830" Jan 29 16:24:24.764854 kubelet[3011]: I0129 16:24:24.764252 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d75c7df-kdpvr" podStartSLOduration=18.348723924 podStartE2EDuration="22.76422351s" podCreationTimestamp="2025-01-29 16:24:02 +0000 UTC" firstStartedPulling="2025-01-29 16:24:19.868057338 +0000 UTC m=+41.077482409" lastFinishedPulling="2025-01-29 16:24:24.283556964 +0000 UTC m=+45.492981995" observedRunningTime="2025-01-29 16:24:24.760781725 +0000 UTC m=+45.970206796" watchObservedRunningTime="2025-01-29 16:24:24.76422351 +0000 UTC m=+45.973648701" Jan 29 16:24:24.786246 kubelet[3011]: I0129 16:24:24.786069 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d75c7df-94j9d" podStartSLOduration=18.617147542 podStartE2EDuration="22.786046656s" podCreationTimestamp="2025-01-29 16:24:02 +0000 UTC" firstStartedPulling="2025-01-29 16:24:19.687602925 +0000 UTC m=+40.897027996" lastFinishedPulling="2025-01-29 16:24:23.856502039 +0000 UTC m=+45.065927110" observedRunningTime="2025-01-29 16:24:24.785799488 +0000 UTC m=+45.995224559" watchObservedRunningTime="2025-01-29 16:24:24.786046656 +0000 UTC m=+45.995471727" Jan 29 16:24:25.772803 kubelet[3011]: I0129 16:24:25.772740 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:25.773936 kubelet[3011]: I0129 16:24:25.773563 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:24:25.791370 containerd[1498]: time="2025-01-29T16:24:25.791112940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:25.794666 containerd[1498]: time="2025-01-29T16:24:25.793783302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 16:24:25.797138 containerd[1498]: time="2025-01-29T16:24:25.796822555Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:25.802839 containerd[1498]: time="2025-01-29T16:24:25.802746977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:25.804466 containerd[1498]: time="2025-01-29T16:24:25.803729007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id 
\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.516136601s" Jan 29 16:24:25.804466 containerd[1498]: time="2025-01-29T16:24:25.803790929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 16:24:25.806666 containerd[1498]: time="2025-01-29T16:24:25.806154842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 16:24:25.808025 containerd[1498]: time="2025-01-29T16:24:25.807991018Z" level=info msg="CreateContainer within sandbox \"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 16:24:25.842500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount18266025.mount: Deactivated successfully. Jan 29 16:24:25.849151 containerd[1498]: time="2025-01-29T16:24:25.849101600Z" level=info msg="CreateContainer within sandbox \"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e2ec59f6c7ce94952c9d58885f70085598d126ac9743d143f2e989be2076b442\"" Jan 29 16:24:25.854271 containerd[1498]: time="2025-01-29T16:24:25.850120191Z" level=info msg="StartContainer for \"e2ec59f6c7ce94952c9d58885f70085598d126ac9743d143f2e989be2076b442\"" Jan 29 16:24:25.904361 systemd[1]: Started cri-containerd-e2ec59f6c7ce94952c9d58885f70085598d126ac9743d143f2e989be2076b442.scope - libcontainer container e2ec59f6c7ce94952c9d58885f70085598d126ac9743d143f2e989be2076b442. Jan 29 16:24:25.955090 containerd[1498]: time="2025-01-29T16:24:25.953711770Z" level=info msg="StartContainer for \"e2ec59f6c7ce94952c9d58885f70085598d126ac9743d143f2e989be2076b442\" returns successfully" Jan 29 16:24:29.331295 systemd[1]: Started sshd@37-167.235.198.80:22-149.50.252.131:59052.service - OpenSSH per-connection server daemon (149.50.252.131:59052). Jan 29 16:24:29.406350 systemd[1]: Started sshd@38-167.235.198.80:22-149.50.252.131:59064.service - OpenSSH per-connection server daemon (149.50.252.131:59064). Jan 29 16:24:29.526889 sshd[5813]: Connection closed by 149.50.252.131 port 59052 [preauth] Jan 29 16:24:29.530553 systemd[1]: sshd@37-167.235.198.80:22-149.50.252.131:59052.service: Deactivated successfully. Jan 29 16:24:29.609581 sshd[5816]: Connection closed by 149.50.252.131 port 59064 [preauth] Jan 29 16:24:29.614290 systemd[1]: sshd@38-167.235.198.80:22-149.50.252.131:59064.service: Deactivated successfully. 
Jan 29 16:24:30.715694 containerd[1498]: time="2025-01-29T16:24:30.714938871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:30.718107 containerd[1498]: time="2025-01-29T16:24:30.718040369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 29 16:24:30.719948 containerd[1498]: time="2025-01-29T16:24:30.719885827Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:30.725737 containerd[1498]: time="2025-01-29T16:24:30.724547094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:30.725737 containerd[1498]: time="2025-01-29T16:24:30.725363239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.919145076s" Jan 29 16:24:30.725737 containerd[1498]: time="2025-01-29T16:24:30.725396201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 29 16:24:30.727518 containerd[1498]: time="2025-01-29T16:24:30.727477626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 16:24:30.763151 containerd[1498]: time="2025-01-29T16:24:30.763108468Z" level=info msg="CreateContainer within sandbox \"8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 16:24:30.806751 containerd[1498]: time="2025-01-29T16:24:30.806695440Z" level=info msg="CreateContainer within sandbox \"8421a14828789ad60de833847d5c2f90c8bce109b24ea6a914d83b64b513fa87\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f\"" Jan 29 16:24:30.809314 containerd[1498]: time="2025-01-29T16:24:30.809204199Z" level=info msg="StartContainer for \"77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f\"" Jan 29 16:24:30.857044 systemd[1]: Started cri-containerd-77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f.scope - libcontainer container 77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f. 
Jan 29 16:24:30.913609 containerd[1498]: time="2025-01-29T16:24:30.913258194Z" level=info msg="StartContainer for \"77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f\" returns successfully" Jan 29 16:24:31.840606 kubelet[3011]: I0129 16:24:31.840330 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-589789ffcb-dfxmb" podStartSLOduration=18.215981938 podStartE2EDuration="28.840311216s" podCreationTimestamp="2025-01-29 16:24:03 +0000 UTC" firstStartedPulling="2025-01-29 16:24:20.102307842 +0000 UTC m=+41.311732913" lastFinishedPulling="2025-01-29 16:24:30.72663712 +0000 UTC m=+51.936062191" observedRunningTime="2025-01-29 16:24:31.837549128 +0000 UTC m=+53.046974199" watchObservedRunningTime="2025-01-29 16:24:31.840311216 +0000 UTC m=+53.049736247" Jan 29 16:24:31.850610 systemd[1]: run-containerd-runc-k8s.io-77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f-runc.yFrZ4B.mount: Deactivated successfully. Jan 29 16:24:32.372264 systemd[1]: Started sshd@39-167.235.198.80:22-134.122.8.241:53942.service - OpenSSH per-connection server daemon (134.122.8.241:53942). Jan 29 16:24:32.543660 containerd[1498]: time="2025-01-29T16:24:32.543488967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:32.546104 containerd[1498]: time="2025-01-29T16:24:32.545979126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 16:24:32.546838 containerd[1498]: time="2025-01-29T16:24:32.546300136Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:32.552764 containerd[1498]: time="2025-01-29T16:24:32.552685939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 16:24:32.556706 containerd[1498]: time="2025-01-29T16:24:32.556603623Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.828912751s" Jan 29 16:24:32.557033 containerd[1498]: time="2025-01-29T16:24:32.556978875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 16:24:32.567805 containerd[1498]: time="2025-01-29T16:24:32.567092556Z" level=info msg="CreateContainer within sandbox \"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 16:24:32.600726 containerd[1498]: time="2025-01-29T16:24:32.600660902Z" level=info msg="CreateContainer within sandbox \"97246bd8f4510bb8f13b9c54a9ba7505e6ac4fdfe17db56e97bfc0d25733c64e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d1e525904638e045b4abb6fa1db613d40feec9f019cc4aab5e1278c2ad68d01f\"" Jan 29 
16:24:32.605316 containerd[1498]: time="2025-01-29T16:24:32.605213727Z" level=info msg="StartContainer for \"d1e525904638e045b4abb6fa1db613d40feec9f019cc4aab5e1278c2ad68d01f\"" Jan 29 16:24:32.652753 systemd[1]: Started cri-containerd-d1e525904638e045b4abb6fa1db613d40feec9f019cc4aab5e1278c2ad68d01f.scope - libcontainer container d1e525904638e045b4abb6fa1db613d40feec9f019cc4aab5e1278c2ad68d01f. Jan 29 16:24:32.693796 containerd[1498]: time="2025-01-29T16:24:32.693659576Z" level=info msg="StartContainer for \"d1e525904638e045b4abb6fa1db613d40feec9f019cc4aab5e1278c2ad68d01f\" returns successfully" Jan 29 16:24:32.835851 kubelet[3011]: I0129 16:24:32.834404 3011 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z22bk" podStartSLOduration=17.182922768 podStartE2EDuration="29.834382286s" podCreationTimestamp="2025-01-29 16:24:03 +0000 UTC" firstStartedPulling="2025-01-29 16:24:19.909559445 +0000 UTC m=+41.118984516" lastFinishedPulling="2025-01-29 16:24:32.561018963 +0000 UTC m=+53.770444034" observedRunningTime="2025-01-29 16:24:32.834111837 +0000 UTC m=+54.043536908" watchObservedRunningTime="2025-01-29 16:24:32.834382286 +0000 UTC m=+54.043807397" Jan 29 16:24:32.943306 sshd[5893]: Invalid user superman from 134.122.8.241 port 53942 Jan 29 16:24:33.039351 sshd[5893]: Received disconnect from 134.122.8.241 port 53942:11: Bye Bye [preauth] Jan 29 16:24:33.039351 sshd[5893]: Disconnected from invalid user superman 134.122.8.241 port 53942 [preauth] Jan 29 16:24:33.042571 systemd[1]: sshd@39-167.235.198.80:22-134.122.8.241:53942.service: Deactivated successfully. Jan 29 16:24:33.071149 kubelet[3011]: I0129 16:24:33.070679 3011 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 16:24:33.071149 kubelet[3011]: I0129 16:24:33.070745 3011 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 16:24:38.907183 containerd[1498]: time="2025-01-29T16:24:38.907112578Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:38.907956 containerd[1498]: time="2025-01-29T16:24:38.907708837Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:38.907956 containerd[1498]: time="2025-01-29T16:24:38.907732158Z" level=info msg="StopPodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:38.908951 containerd[1498]: time="2025-01-29T16:24:38.908758471Z" level=info msg="RemovePodSandbox for \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:38.908951 containerd[1498]: time="2025-01-29T16:24:38.908802193Z" level=info msg="Forcibly stopping sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\"" Jan 29 16:24:38.910099 containerd[1498]: time="2025-01-29T16:24:38.909162604Z" level=info msg="TearDown network for sandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" successfully" Jan 29 16:24:38.913865 containerd[1498]: time="2025-01-29T16:24:38.913768394Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Jan 29 16:24:38.914377 containerd[1498]: time="2025-01-29T16:24:38.914167407Z" level=info msg="RemovePodSandbox \"c5be2cba12d327fbf6786c330f2ead0f15c315590435dbc5cc66ca6f7aa3a892\" returns successfully" Jan 29 16:24:38.915217 containerd[1498]: time="2025-01-29T16:24:38.915188600Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:38.915317 containerd[1498]: time="2025-01-29T16:24:38.915300724Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:38.915354 containerd[1498]: time="2025-01-29T16:24:38.915313884Z" level=info msg="StopPodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:38.916532 containerd[1498]: time="2025-01-29T16:24:38.915650375Z" level=info msg="RemovePodSandbox for \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:38.916532 containerd[1498]: time="2025-01-29T16:24:38.915677016Z" level=info msg="Forcibly stopping sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\"" Jan 29 16:24:38.916532 containerd[1498]: time="2025-01-29T16:24:38.915741818Z" level=info msg="TearDown network for sandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" successfully" Jan 29 16:24:38.922593 containerd[1498]: time="2025-01-29T16:24:38.922536199Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.922747 containerd[1498]: time="2025-01-29T16:24:38.922619122Z" level=info msg="RemovePodSandbox \"681fb197199ca0450d3190ca09d442f3b3c538c91a93295eb7fd142c12652231\" returns successfully" Jan 29 16:24:38.923482 containerd[1498]: time="2025-01-29T16:24:38.923436428Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:38.923682 containerd[1498]: time="2025-01-29T16:24:38.923635275Z" level=info msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:38.923682 containerd[1498]: time="2025-01-29T16:24:38.923654835Z" level=info msg="StopPodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:38.924286 containerd[1498]: time="2025-01-29T16:24:38.924084329Z" level=info msg="RemovePodSandbox for \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:38.924286 containerd[1498]: time="2025-01-29T16:24:38.924126691Z" level=info msg="Forcibly stopping sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\"" Jan 29 16:24:38.924286 containerd[1498]: time="2025-01-29T16:24:38.924208493Z" level=info msg="TearDown network for sandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" successfully" Jan 29 16:24:38.928982 containerd[1498]: time="2025-01-29T16:24:38.928484593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:38.928982 containerd[1498]: time="2025-01-29T16:24:38.928626397Z" level=info msg="RemovePodSandbox \"85c681f993a1c78a4b18ccae24badc2f7e1aaee4af1aac57f2e1ec95675228e4\" returns successfully" Jan 29 16:24:38.929708 containerd[1498]: time="2025-01-29T16:24:38.929664471Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:38.930186 containerd[1498]: time="2025-01-29T16:24:38.929791275Z" level=info msg="TearDown network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" successfully" Jan 29 16:24:38.930186 containerd[1498]: time="2025-01-29T16:24:38.929802275Z" level=info msg="StopPodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" returns successfully" Jan 29 16:24:38.932869 containerd[1498]: time="2025-01-29T16:24:38.931500211Z" level=info msg="RemovePodSandbox for \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:38.932869 containerd[1498]: time="2025-01-29T16:24:38.931569053Z" level=info msg="Forcibly stopping sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\"" Jan 29 16:24:38.932869 containerd[1498]: time="2025-01-29T16:24:38.931706097Z" level=info msg="TearDown network for sandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" successfully" Jan 29 16:24:38.937123 containerd[1498]: time="2025-01-29T16:24:38.936763502Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.937358 containerd[1498]: time="2025-01-29T16:24:38.937153234Z" level=info msg="RemovePodSandbox \"057c7c21a74af99294f2db97749a419f71f1186b73dc36f857fbe456d5b5b9ea\" returns successfully" Jan 29 16:24:38.937933 containerd[1498]: time="2025-01-29T16:24:38.937601289Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" Jan 29 16:24:38.937933 containerd[1498]: time="2025-01-29T16:24:38.937756014Z" level=info msg="TearDown network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" successfully" Jan 29 16:24:38.937933 containerd[1498]: time="2025-01-29T16:24:38.937769774Z" level=info msg="StopPodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" returns successfully" Jan 29 16:24:38.938199 containerd[1498]: time="2025-01-29T16:24:38.938133506Z" level=info msg="RemovePodSandbox for \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" Jan 29 16:24:38.938199 containerd[1498]: time="2025-01-29T16:24:38.938175228Z" level=info msg="Forcibly stopping sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\"" Jan 29 16:24:38.938338 containerd[1498]: time="2025-01-29T16:24:38.938317152Z" level=info msg="TearDown network for sandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" successfully" Jan 29 16:24:38.942509 containerd[1498]: time="2025-01-29T16:24:38.942449727Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:38.942855 containerd[1498]: time="2025-01-29T16:24:38.942544330Z" level=info msg="RemovePodSandbox \"20688a6c3c494401892f467f899bf88da0ff231e461b830d536826468e753513\" returns successfully" Jan 29 16:24:38.943368 containerd[1498]: time="2025-01-29T16:24:38.943091587Z" level=info msg="StopPodSandbox for \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\"" Jan 29 16:24:38.943368 containerd[1498]: time="2025-01-29T16:24:38.943214431Z" level=info msg="TearDown network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" successfully" Jan 29 16:24:38.943368 containerd[1498]: time="2025-01-29T16:24:38.943226512Z" level=info msg="StopPodSandbox for \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" returns successfully" Jan 29 16:24:38.944049 containerd[1498]: time="2025-01-29T16:24:38.943990737Z" level=info msg="RemovePodSandbox for \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\"" Jan 29 16:24:38.944157 containerd[1498]: time="2025-01-29T16:24:38.944055579Z" level=info msg="Forcibly stopping sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\"" Jan 29 16:24:38.944200 containerd[1498]: time="2025-01-29T16:24:38.944178983Z" level=info msg="TearDown network for sandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" successfully" Jan 29 16:24:38.949284 containerd[1498]: time="2025-01-29T16:24:38.948662088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.949284 containerd[1498]: time="2025-01-29T16:24:38.949152984Z" level=info msg="RemovePodSandbox \"3fd9c34aac8aa3abc5495ca2a79c47fa0ede210e0a65841ad0d23cbdbc05d1de\" returns successfully" Jan 29 16:24:38.951671 containerd[1498]: time="2025-01-29T16:24:38.950686474Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:38.951671 containerd[1498]: time="2025-01-29T16:24:38.950864120Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:38.951671 containerd[1498]: time="2025-01-29T16:24:38.950883601Z" level=info msg="StopPodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:38.952829 containerd[1498]: time="2025-01-29T16:24:38.952613657Z" level=info msg="RemovePodSandbox for \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:38.952829 containerd[1498]: time="2025-01-29T16:24:38.952658058Z" level=info msg="Forcibly stopping sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\"" Jan 29 16:24:38.952829 containerd[1498]: time="2025-01-29T16:24:38.952750141Z" level=info msg="TearDown network for sandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" successfully" Jan 29 16:24:38.958629 containerd[1498]: time="2025-01-29T16:24:38.958201879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:38.958629 containerd[1498]: time="2025-01-29T16:24:38.958289321Z" level=info msg="RemovePodSandbox \"603ddf77060b2c3270c0d5556e93f98cf5885a91a2ab7a22649733257aa8c706\" returns successfully" Jan 29 16:24:38.959558 containerd[1498]: time="2025-01-29T16:24:38.959063627Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:38.959558 containerd[1498]: time="2025-01-29T16:24:38.959188391Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:38.959558 containerd[1498]: time="2025-01-29T16:24:38.959200311Z" level=info msg="StopPodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:38.960867 containerd[1498]: time="2025-01-29T16:24:38.960077860Z" level=info msg="RemovePodSandbox for \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:38.960867 containerd[1498]: time="2025-01-29T16:24:38.960184783Z" level=info msg="Forcibly stopping sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\"" Jan 29 16:24:38.960867 containerd[1498]: time="2025-01-29T16:24:38.960269426Z" level=info msg="TearDown network for sandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" successfully" Jan 29 16:24:38.969420 containerd[1498]: time="2025-01-29T16:24:38.969344081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.969602 containerd[1498]: time="2025-01-29T16:24:38.969443924Z" level=info msg="RemovePodSandbox \"5ad5e9304f915b6453656e6e9943d4d9a0d3b4725f51d1211e537ba34e8213a1\" returns successfully" Jan 29 16:24:38.970644 containerd[1498]: time="2025-01-29T16:24:38.970301152Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:38.970644 containerd[1498]: time="2025-01-29T16:24:38.970426476Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:38.970644 containerd[1498]: time="2025-01-29T16:24:38.970438036Z" level=info msg="StopPodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:38.971298 containerd[1498]: time="2025-01-29T16:24:38.971232382Z" level=info msg="RemovePodSandbox for \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:38.971298 containerd[1498]: time="2025-01-29T16:24:38.971270943Z" level=info msg="Forcibly stopping sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\"" Jan 29 16:24:38.971456 containerd[1498]: time="2025-01-29T16:24:38.971357626Z" level=info msg="TearDown network for sandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" successfully" Jan 29 16:24:38.976228 containerd[1498]: time="2025-01-29T16:24:38.976124381Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:38.976228 containerd[1498]: time="2025-01-29T16:24:38.976232025Z" level=info msg="RemovePodSandbox \"55c684afe5b3c77dfcab0cd683c9aacd8d578407e78e410600a0fa0a12a032a3\" returns successfully" Jan 29 16:24:38.977462 containerd[1498]: time="2025-01-29T16:24:38.977081532Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:38.977462 containerd[1498]: time="2025-01-29T16:24:38.977218377Z" level=info msg="TearDown network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" successfully" Jan 29 16:24:38.977462 containerd[1498]: time="2025-01-29T16:24:38.977230857Z" level=info msg="StopPodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" returns successfully" Jan 29 16:24:38.978324 containerd[1498]: time="2025-01-29T16:24:38.978064444Z" level=info msg="RemovePodSandbox for \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:38.978324 containerd[1498]: time="2025-01-29T16:24:38.978102846Z" level=info msg="Forcibly stopping sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\"" Jan 29 16:24:38.978324 containerd[1498]: time="2025-01-29T16:24:38.978199609Z" level=info msg="TearDown network for sandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" successfully" Jan 29 16:24:38.983577 containerd[1498]: time="2025-01-29T16:24:38.983337576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.983577 containerd[1498]: time="2025-01-29T16:24:38.983419658Z" level=info msg="RemovePodSandbox \"fe4758bc6d480dad4cbbe2a8fb974e2e2952ea5d9a64cee44d0a4d437dcdd6be\" returns successfully" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.984465932Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.984672139Z" level=info msg="TearDown network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" successfully" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.984684300Z" level=info msg="StopPodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" returns successfully" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.985018150Z" level=info msg="RemovePodSandbox for \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.985044191Z" level=info msg="Forcibly stopping sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\"" Jan 29 16:24:38.985264 containerd[1498]: time="2025-01-29T16:24:38.985117994Z" level=info msg="TearDown network for sandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" successfully" Jan 29 16:24:38.989872 containerd[1498]: time="2025-01-29T16:24:38.989773065Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:38.990559 containerd[1498]: time="2025-01-29T16:24:38.990122596Z" level=info msg="RemovePodSandbox \"c5a6a13f2074774ba049f5ecac40d39cd0b47ecddd11b5ef5e0831158d85bbe0\" returns successfully" Jan 29 16:24:38.990901 containerd[1498]: time="2025-01-29T16:24:38.990776218Z" level=info msg="StopPodSandbox for \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\"" Jan 29 16:24:38.990997 containerd[1498]: time="2025-01-29T16:24:38.990967144Z" level=info msg="TearDown network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" successfully" Jan 29 16:24:38.990997 containerd[1498]: time="2025-01-29T16:24:38.990981744Z" level=info msg="StopPodSandbox for \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" returns successfully" Jan 29 16:24:38.992411 containerd[1498]: time="2025-01-29T16:24:38.992372429Z" level=info msg="RemovePodSandbox for \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\"" Jan 29 16:24:38.992411 containerd[1498]: time="2025-01-29T16:24:38.992417991Z" level=info msg="Forcibly stopping sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\"" Jan 29 16:24:38.992587 containerd[1498]: time="2025-01-29T16:24:38.992516514Z" level=info msg="TearDown network for sandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" successfully" Jan 29 16:24:38.996604 containerd[1498]: time="2025-01-29T16:24:38.996232795Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:38.996604 containerd[1498]: time="2025-01-29T16:24:38.996327558Z" level=info msg="RemovePodSandbox \"3bd681717d89ffb35e0b1a0bf56724acd810096a037aee928144676b55c2e691\" returns successfully" Jan 29 16:24:38.999909 containerd[1498]: time="2025-01-29T16:24:38.999389578Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:38.999909 containerd[1498]: time="2025-01-29T16:24:38.999583664Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:38.999909 containerd[1498]: time="2025-01-29T16:24:38.999597584Z" level=info msg="StopPodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:39.000158 containerd[1498]: time="2025-01-29T16:24:39.000092320Z" level=info msg="RemovePodSandbox for \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:39.000158 containerd[1498]: time="2025-01-29T16:24:39.000140522Z" level=info msg="Forcibly stopping sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\"" Jan 29 16:24:39.000360 containerd[1498]: time="2025-01-29T16:24:39.000283327Z" level=info msg="TearDown network for sandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" successfully" Jan 29 16:24:39.008449 containerd[1498]: time="2025-01-29T16:24:39.007572124Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.008449 containerd[1498]: time="2025-01-29T16:24:39.007669768Z" level=info msg="RemovePodSandbox \"952b7b4fe72f37b66ba63da224ec92f653716fd5c3b56a2a0cd4de25133ddc53\" returns successfully" Jan 29 16:24:39.010605 containerd[1498]: time="2025-01-29T16:24:39.010446818Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:39.010605 containerd[1498]: time="2025-01-29T16:24:39.010588183Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:39.010605 containerd[1498]: time="2025-01-29T16:24:39.010599943Z" level=info msg="StopPodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:39.011294 containerd[1498]: time="2025-01-29T16:24:39.011254444Z" level=info msg="RemovePodSandbox for \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:39.011333 containerd[1498]: time="2025-01-29T16:24:39.011298366Z" level=info msg="Forcibly stopping sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\"" Jan 29 16:24:39.011636 containerd[1498]: time="2025-01-29T16:24:39.011381049Z" level=info msg="TearDown network for sandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" successfully" Jan 29 16:24:39.015619 containerd[1498]: time="2025-01-29T16:24:39.015386099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.015619 containerd[1498]: time="2025-01-29T16:24:39.015508023Z" level=info msg="RemovePodSandbox \"ba3be5bf521f75635518295b226049e197e1cfe7e12f1e47b527600d72b670fd\" returns successfully" Jan 29 16:24:39.016957 containerd[1498]: time="2025-01-29T16:24:39.016319530Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:39.016957 containerd[1498]: time="2025-01-29T16:24:39.016515536Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:39.016957 containerd[1498]: time="2025-01-29T16:24:39.016528817Z" level=info msg="StopPodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:39.017989 containerd[1498]: time="2025-01-29T16:24:39.017712255Z" level=info msg="RemovePodSandbox for \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:39.017989 containerd[1498]: time="2025-01-29T16:24:39.017754656Z" level=info msg="Forcibly stopping sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\"" Jan 29 16:24:39.017989 containerd[1498]: time="2025-01-29T16:24:39.017870860Z" level=info msg="TearDown network for sandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" successfully" Jan 29 16:24:39.021494 containerd[1498]: time="2025-01-29T16:24:39.021414696Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.021657 containerd[1498]: time="2025-01-29T16:24:39.021518659Z" level=info msg="RemovePodSandbox \"ddadd9ab0869b691fba0ca1489b5ad024e1fa51c623d053e098aab79514e149e\" returns successfully" Jan 29 16:24:39.022695 containerd[1498]: time="2025-01-29T16:24:39.022261484Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:39.022695 containerd[1498]: time="2025-01-29T16:24:39.022401008Z" level=info msg="TearDown network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" successfully" Jan 29 16:24:39.022695 containerd[1498]: time="2025-01-29T16:24:39.022413928Z" level=info msg="StopPodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" returns successfully" Jan 29 16:24:39.023417 containerd[1498]: time="2025-01-29T16:24:39.023322358Z" level=info msg="RemovePodSandbox for \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:39.023417 containerd[1498]: time="2025-01-29T16:24:39.023373600Z" level=info msg="Forcibly stopping sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\"" Jan 29 16:24:39.027674 containerd[1498]: time="2025-01-29T16:24:39.027586297Z" level=info msg="TearDown network for sandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" successfully" Jan 29 16:24:39.032478 containerd[1498]: time="2025-01-29T16:24:39.032195808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.032478 containerd[1498]: time="2025-01-29T16:24:39.032265090Z" level=info msg="RemovePodSandbox \"37e3833c0be3eb6156c5e04ce25964f0b3480c0ee2ffc585eee748de3825ffd3\" returns successfully" Jan 29 16:24:39.032758 containerd[1498]: time="2025-01-29T16:24:39.032565620Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" Jan 29 16:24:39.032758 containerd[1498]: time="2025-01-29T16:24:39.032661063Z" level=info msg="TearDown network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" successfully" Jan 29 16:24:39.032758 containerd[1498]: time="2025-01-29T16:24:39.032671583Z" level=info msg="StopPodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" returns successfully" Jan 29 16:24:39.033519 containerd[1498]: time="2025-01-29T16:24:39.033494050Z" level=info msg="RemovePodSandbox for \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" Jan 29 16:24:39.033519 containerd[1498]: time="2025-01-29T16:24:39.033524451Z" level=info msg="Forcibly stopping sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\"" Jan 29 16:24:39.033640 containerd[1498]: time="2025-01-29T16:24:39.033598013Z" level=info msg="TearDown network for sandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" successfully" Jan 29 16:24:39.037389 containerd[1498]: time="2025-01-29T16:24:39.037272213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.037389 containerd[1498]: time="2025-01-29T16:24:39.037360776Z" level=info msg="RemovePodSandbox \"1658e70b42f79a5ec90633635e6c7caaebe0faa58642ee317ec7e93e8ee99bb6\" returns successfully" Jan 29 16:24:39.039115 containerd[1498]: time="2025-01-29T16:24:39.038652498Z" level=info msg="StopPodSandbox for \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\"" Jan 29 16:24:39.039115 containerd[1498]: time="2025-01-29T16:24:39.038774582Z" level=info msg="TearDown network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" successfully" Jan 29 16:24:39.039115 containerd[1498]: time="2025-01-29T16:24:39.038786503Z" level=info msg="StopPodSandbox for \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" returns successfully" Jan 29 16:24:39.041467 containerd[1498]: time="2025-01-29T16:24:39.040196909Z" level=info msg="RemovePodSandbox for \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\"" Jan 29 16:24:39.041467 containerd[1498]: time="2025-01-29T16:24:39.040234470Z" level=info msg="Forcibly stopping sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\"" Jan 29 16:24:39.041467 containerd[1498]: time="2025-01-29T16:24:39.040316112Z" level=info msg="TearDown network for sandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" successfully" Jan 29 16:24:39.046797 containerd[1498]: time="2025-01-29T16:24:39.046717521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.046971 containerd[1498]: time="2025-01-29T16:24:39.046818285Z" level=info msg="RemovePodSandbox \"3e1f86402fe4f821abb32cfa41a150ad7cfbcb42a276b0ccb2efe6770a52cb05\" returns successfully" Jan 29 16:24:39.047788 containerd[1498]: time="2025-01-29T16:24:39.047531228Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:39.047788 containerd[1498]: time="2025-01-29T16:24:39.047659752Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:39.047788 containerd[1498]: time="2025-01-29T16:24:39.047672432Z" level=info msg="StopPodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:39.049457 containerd[1498]: time="2025-01-29T16:24:39.048083326Z" level=info msg="RemovePodSandbox for \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:39.049457 containerd[1498]: time="2025-01-29T16:24:39.048114647Z" level=info msg="Forcibly stopping sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\"" Jan 29 16:24:39.049457 containerd[1498]: time="2025-01-29T16:24:39.048181609Z" level=info msg="TearDown network for sandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" successfully" Jan 29 16:24:39.052978 containerd[1498]: time="2025-01-29T16:24:39.052914843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.053201 containerd[1498]: time="2025-01-29T16:24:39.053045808Z" level=info msg="RemovePodSandbox \"1c3192d0d2a0dacb0133f726716409d829cd1c6977696c4671598ab80ccedf2e\" returns successfully" Jan 29 16:24:39.054717 containerd[1498]: time="2025-01-29T16:24:39.053596986Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:39.054717 containerd[1498]: time="2025-01-29T16:24:39.053752551Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:39.054717 containerd[1498]: time="2025-01-29T16:24:39.053775352Z" level=info msg="StopPodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:39.054717 containerd[1498]: time="2025-01-29T16:24:39.054579898Z" level=info msg="RemovePodSandbox for \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:39.054717 containerd[1498]: time="2025-01-29T16:24:39.054659580Z" level=info msg="Forcibly stopping sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\"" Jan 29 16:24:39.055233 containerd[1498]: time="2025-01-29T16:24:39.054775664Z" level=info msg="TearDown network for sandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" successfully" Jan 29 16:24:39.060878 containerd[1498]: time="2025-01-29T16:24:39.060616255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.060878 containerd[1498]: time="2025-01-29T16:24:39.060758899Z" level=info msg="RemovePodSandbox \"0c5b0337d18148833002eb813b11ac1fe97308bd01d3877503553d7873498c19\" returns successfully" Jan 29 16:24:39.062613 containerd[1498]: time="2025-01-29T16:24:39.061774332Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:39.062613 containerd[1498]: time="2025-01-29T16:24:39.061891936Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:39.062613 containerd[1498]: time="2025-01-29T16:24:39.061902217Z" level=info msg="StopPodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:39.062613 containerd[1498]: time="2025-01-29T16:24:39.062576559Z" level=info msg="RemovePodSandbox for \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:39.062613 containerd[1498]: time="2025-01-29T16:24:39.062604080Z" level=info msg="Forcibly stopping sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\"" Jan 29 16:24:39.062843 containerd[1498]: time="2025-01-29T16:24:39.062682122Z" level=info msg="TearDown network for sandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" successfully" Jan 29 16:24:39.067343 containerd[1498]: time="2025-01-29T16:24:39.067272832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.067500 containerd[1498]: time="2025-01-29T16:24:39.067358795Z" level=info msg="RemovePodSandbox \"e7a93958b4cb35963484d76983469393e6031d1de04ff7e47c054702587383c5\" returns successfully" Jan 29 16:24:39.068888 containerd[1498]: time="2025-01-29T16:24:39.068852803Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:39.068993 containerd[1498]: time="2025-01-29T16:24:39.068972487Z" level=info msg="TearDown network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" successfully" Jan 29 16:24:39.068993 containerd[1498]: time="2025-01-29T16:24:39.068984488Z" level=info msg="StopPodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" returns successfully" Jan 29 16:24:39.070172 containerd[1498]: time="2025-01-29T16:24:39.070130605Z" level=info msg="RemovePodSandbox for \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:39.070234 containerd[1498]: time="2025-01-29T16:24:39.070181727Z" level=info msg="Forcibly stopping sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\"" Jan 29 16:24:39.070294 containerd[1498]: time="2025-01-29T16:24:39.070277530Z" level=info msg="TearDown network for sandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" successfully" Jan 29 16:24:39.077750 containerd[1498]: time="2025-01-29T16:24:39.077664131Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.078876 containerd[1498]: time="2025-01-29T16:24:39.077768934Z" level=info msg="RemovePodSandbox \"0ad46fd23c442fe33844d8c33d8b8b084bde2a6ba205de437a98322dfc4e0f3d\" returns successfully" Jan 29 16:24:39.079876 containerd[1498]: time="2025-01-29T16:24:39.079825041Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" Jan 29 16:24:39.080096 containerd[1498]: time="2025-01-29T16:24:39.079977526Z" level=info msg="TearDown network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" successfully" Jan 29 16:24:39.080096 containerd[1498]: time="2025-01-29T16:24:39.079990327Z" level=info msg="StopPodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" returns successfully" Jan 29 16:24:39.081202 containerd[1498]: time="2025-01-29T16:24:39.081144444Z" level=info msg="RemovePodSandbox for \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" Jan 29 16:24:39.081202 containerd[1498]: time="2025-01-29T16:24:39.081188086Z" level=info msg="Forcibly stopping sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\"" Jan 29 16:24:39.081335 containerd[1498]: time="2025-01-29T16:24:39.081279329Z" level=info msg="TearDown network for sandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" successfully" Jan 29 16:24:39.109994 containerd[1498]: time="2025-01-29T16:24:39.109925583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.110260 containerd[1498]: time="2025-01-29T16:24:39.110055627Z" level=info msg="RemovePodSandbox \"af9eeed6f4fc65a23cc5d3c2149499ebb93bbdf1f28af59f2d5fa329d14293ae\" returns successfully" Jan 29 16:24:39.111271 containerd[1498]: time="2025-01-29T16:24:39.111228546Z" level=info msg="StopPodSandbox for \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\"" Jan 29 16:24:39.112267 containerd[1498]: time="2025-01-29T16:24:39.111386111Z" level=info msg="TearDown network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" successfully" Jan 29 16:24:39.112267 containerd[1498]: time="2025-01-29T16:24:39.111399391Z" level=info msg="StopPodSandbox for \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" returns successfully" Jan 29 16:24:39.112843 containerd[1498]: time="2025-01-29T16:24:39.112785476Z" level=info msg="RemovePodSandbox for \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\"" Jan 29 16:24:39.113562 containerd[1498]: time="2025-01-29T16:24:39.112837558Z" level=info msg="Forcibly stopping sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\"" Jan 29 16:24:39.113654 containerd[1498]: time="2025-01-29T16:24:39.113625704Z" level=info msg="TearDown network for sandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" successfully" Jan 29 16:24:39.118291 containerd[1498]: time="2025-01-29T16:24:39.118220614Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.118462 containerd[1498]: time="2025-01-29T16:24:39.118313977Z" level=info msg="RemovePodSandbox \"6b76ea7bde1e6cc7ff5fecb0b12bb8b0d27ed9002de851034451b7f00b633043\" returns successfully" Jan 29 16:24:39.118989 containerd[1498]: time="2025-01-29T16:24:39.118930957Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:39.119123 containerd[1498]: time="2025-01-29T16:24:39.119064001Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:39.119123 containerd[1498]: time="2025-01-29T16:24:39.119079922Z" level=info msg="StopPodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:39.119765 containerd[1498]: time="2025-01-29T16:24:39.119732863Z" level=info msg="RemovePodSandbox for \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:39.119765 containerd[1498]: time="2025-01-29T16:24:39.119765664Z" level=info msg="Forcibly stopping sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\"" Jan 29 16:24:39.120212 containerd[1498]: time="2025-01-29T16:24:39.120172117Z" level=info msg="TearDown network for sandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" successfully" Jan 29 16:24:39.123971 containerd[1498]: time="2025-01-29T16:24:39.123892119Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.123971 containerd[1498]: time="2025-01-29T16:24:39.123977402Z" level=info msg="RemovePodSandbox \"610ec26f8140644a1aec0b3aabb3cb118c85439a30800aaf24efedddea015ca6\" returns successfully" Jan 29 16:24:39.124580 containerd[1498]: time="2025-01-29T16:24:39.124482338Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:39.124652 containerd[1498]: time="2025-01-29T16:24:39.124589942Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:39.124652 containerd[1498]: time="2025-01-29T16:24:39.124600742Z" level=info msg="StopPodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:39.125291 containerd[1498]: time="2025-01-29T16:24:39.125183921Z" level=info msg="RemovePodSandbox for \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:39.125291 containerd[1498]: time="2025-01-29T16:24:39.125215122Z" level=info msg="Forcibly stopping sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\"" Jan 29 16:24:39.125291 containerd[1498]: time="2025-01-29T16:24:39.125278364Z" level=info msg="TearDown network for sandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" successfully" Jan 29 16:24:39.129692 containerd[1498]: time="2025-01-29T16:24:39.129610505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.129859 containerd[1498]: time="2025-01-29T16:24:39.129739470Z" level=info msg="RemovePodSandbox \"9ebb5ab03ee0df2d79bab9997afd4019de387de8ebeefef8d30f602566a2859d\" returns successfully" Jan 29 16:24:39.130798 containerd[1498]: time="2025-01-29T16:24:39.130701741Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:39.131058 containerd[1498]: time="2025-01-29T16:24:39.130984270Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:39.131242 containerd[1498]: time="2025-01-29T16:24:39.131075513Z" level=info msg="StopPodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns successfully" Jan 29 16:24:39.131652 containerd[1498]: time="2025-01-29T16:24:39.131539368Z" level=info msg="RemovePodSandbox for \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:39.131652 containerd[1498]: time="2025-01-29T16:24:39.131570809Z" level=info msg="Forcibly stopping sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\"" Jan 29 16:24:39.131652 containerd[1498]: time="2025-01-29T16:24:39.131637691Z" level=info msg="TearDown network for sandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" successfully" Jan 29 16:24:39.135772 containerd[1498]: time="2025-01-29T16:24:39.135673423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.135903 containerd[1498]: time="2025-01-29T16:24:39.135864269Z" level=info msg="RemovePodSandbox \"c1f784dd80bae3ad651597fd65e3e18b9b81723bda0aff58290f4f5881c0c6a6\" returns successfully" Jan 29 16:24:39.136437 containerd[1498]: time="2025-01-29T16:24:39.136403727Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:39.136524 containerd[1498]: time="2025-01-29T16:24:39.136510250Z" level=info msg="TearDown network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" successfully" Jan 29 16:24:39.136524 containerd[1498]: time="2025-01-29T16:24:39.136520411Z" level=info msg="StopPodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" returns successfully" Jan 29 16:24:39.136845 containerd[1498]: time="2025-01-29T16:24:39.136825181Z" level=info msg="RemovePodSandbox for \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:39.136913 containerd[1498]: time="2025-01-29T16:24:39.136849341Z" level=info msg="Forcibly stopping sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\"" Jan 29 16:24:39.136913 containerd[1498]: time="2025-01-29T16:24:39.136905543Z" level=info msg="TearDown network for sandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" successfully" Jan 29 16:24:39.140577 containerd[1498]: time="2025-01-29T16:24:39.140532462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.140701 containerd[1498]: time="2025-01-29T16:24:39.140605184Z" level=info msg="RemovePodSandbox \"2bdb0fee8a83403bd702c0833eb0182636a23c68571184b791adc5dd0ebeff26\" returns successfully" Jan 29 16:24:39.141089 containerd[1498]: time="2025-01-29T16:24:39.141065879Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" Jan 29 16:24:39.141279 containerd[1498]: time="2025-01-29T16:24:39.141164762Z" level=info msg="TearDown network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" successfully" Jan 29 16:24:39.141279 containerd[1498]: time="2025-01-29T16:24:39.141175003Z" level=info msg="StopPodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" returns successfully" Jan 29 16:24:39.141623 containerd[1498]: time="2025-01-29T16:24:39.141595016Z" level=info msg="RemovePodSandbox for \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" Jan 29 16:24:39.141623 containerd[1498]: time="2025-01-29T16:24:39.141617297Z" level=info msg="Forcibly stopping sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\"" Jan 29 16:24:39.141708 containerd[1498]: time="2025-01-29T16:24:39.141672939Z" level=info msg="TearDown network for sandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" successfully" Jan 29 16:24:39.145470 containerd[1498]: time="2025-01-29T16:24:39.145412461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.145603 containerd[1498]: time="2025-01-29T16:24:39.145490823Z" level=info msg="RemovePodSandbox \"8ebd2321c9f870546cf16db3a5740fa47149f940457d52deb8df99aac227f775\" returns successfully" Jan 29 16:24:39.147567 containerd[1498]: time="2025-01-29T16:24:39.146369572Z" level=info msg="StopPodSandbox for \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\"" Jan 29 16:24:39.147567 containerd[1498]: time="2025-01-29T16:24:39.146782385Z" level=info msg="TearDown network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" successfully" Jan 29 16:24:39.147567 containerd[1498]: time="2025-01-29T16:24:39.146844628Z" level=info msg="StopPodSandbox for \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" returns successfully" Jan 29 16:24:39.156109 containerd[1498]: time="2025-01-29T16:24:39.156023367Z" level=info msg="RemovePodSandbox for \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\"" Jan 29 16:24:39.156744 containerd[1498]: time="2025-01-29T16:24:39.156703949Z" level=info msg="Forcibly stopping sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\"" Jan 29 16:24:39.157284 containerd[1498]: time="2025-01-29T16:24:39.156932597Z" level=info msg="TearDown network for sandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" successfully" Jan 29 16:24:39.165154 containerd[1498]: time="2025-01-29T16:24:39.165102703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.165499 containerd[1498]: time="2025-01-29T16:24:39.165220027Z" level=info msg="RemovePodSandbox \"3b393b04059697302e0d2742550f3c0fe0ac273605e4d323cbea172b12e2f3d7\" returns successfully" Jan 29 16:24:39.166236 containerd[1498]: time="2025-01-29T16:24:39.166046774Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:39.166236 containerd[1498]: time="2025-01-29T16:24:39.166168858Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:39.166236 containerd[1498]: time="2025-01-29T16:24:39.166180938Z" level=info msg="StopPodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:39.166572 containerd[1498]: time="2025-01-29T16:24:39.166511069Z" level=info msg="RemovePodSandbox for \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:39.166572 containerd[1498]: time="2025-01-29T16:24:39.166541950Z" level=info msg="Forcibly stopping sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\"" Jan 29 16:24:39.166746 containerd[1498]: time="2025-01-29T16:24:39.166604552Z" level=info msg="TearDown network for sandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" successfully" Jan 29 16:24:39.170576 containerd[1498]: time="2025-01-29T16:24:39.170484839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.170872 containerd[1498]: time="2025-01-29T16:24:39.170606123Z" level=info msg="RemovePodSandbox \"038e02f55b3a7eabda5675e5ee068028e908c9be29c76ac6b3fbbeec732187c5\" returns successfully" Jan 29 16:24:39.171386 containerd[1498]: time="2025-01-29T16:24:39.171097339Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:39.171386 containerd[1498]: time="2025-01-29T16:24:39.171202622Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:39.171386 containerd[1498]: time="2025-01-29T16:24:39.171213902Z" level=info msg="StopPodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:39.171845 containerd[1498]: time="2025-01-29T16:24:39.171763160Z" level=info msg="RemovePodSandbox for \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:39.171845 containerd[1498]: time="2025-01-29T16:24:39.171834403Z" level=info msg="Forcibly stopping sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\"" Jan 29 16:24:39.171938 containerd[1498]: time="2025-01-29T16:24:39.171926926Z" level=info msg="TearDown network for sandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" successfully" Jan 29 16:24:39.175289 containerd[1498]: time="2025-01-29T16:24:39.175225713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.175612 containerd[1498]: time="2025-01-29T16:24:39.175302196Z" level=info msg="RemovePodSandbox \"b820886ec0abbbcd44d96cc37d1439cfb0535cbf53a9f05df6aef363785e5303\" returns successfully" Jan 29 16:24:39.175830 containerd[1498]: time="2025-01-29T16:24:39.175756731Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:39.175898 containerd[1498]: time="2025-01-29T16:24:39.175873294Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:39.175898 containerd[1498]: time="2025-01-29T16:24:39.175885175Z" level=info msg="StopPodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:39.176475 containerd[1498]: time="2025-01-29T16:24:39.176385991Z" level=info msg="RemovePodSandbox for \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:39.176475 containerd[1498]: time="2025-01-29T16:24:39.176414712Z" level=info msg="Forcibly stopping sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\"" Jan 29 16:24:39.176596 containerd[1498]: time="2025-01-29T16:24:39.176490275Z" level=info msg="TearDown network for sandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" successfully" Jan 29 16:24:39.180185 containerd[1498]: time="2025-01-29T16:24:39.180113313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.180325 containerd[1498]: time="2025-01-29T16:24:39.180271318Z" level=info msg="RemovePodSandbox \"1ed0f406a32c51c7e958ead43be47c735855b6f4a1c81369be5dae594c869906\" returns successfully" Jan 29 16:24:39.181149 containerd[1498]: time="2025-01-29T16:24:39.180907099Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:39.181149 containerd[1498]: time="2025-01-29T16:24:39.181034343Z" level=info msg="TearDown network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" successfully" Jan 29 16:24:39.181149 containerd[1498]: time="2025-01-29T16:24:39.181046063Z" level=info msg="StopPodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" returns successfully" Jan 29 16:24:39.181614 containerd[1498]: time="2025-01-29T16:24:39.181590321Z" level=info msg="RemovePodSandbox for \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:39.181664 containerd[1498]: time="2025-01-29T16:24:39.181622042Z" level=info msg="Forcibly stopping sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\"" Jan 29 16:24:39.181711 containerd[1498]: time="2025-01-29T16:24:39.181695444Z" level=info msg="TearDown network for sandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" successfully" Jan 29 16:24:39.185830 containerd[1498]: time="2025-01-29T16:24:39.185736976Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.185830 containerd[1498]: time="2025-01-29T16:24:39.185808419Z" level=info msg="RemovePodSandbox \"3ba782e0fd8eeb757e6e17ec49e452563de46577411216d33ef99d97aa1ea128\" returns successfully" Jan 29 16:24:39.186907 containerd[1498]: time="2025-01-29T16:24:39.186857533Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" Jan 29 16:24:39.187113 containerd[1498]: time="2025-01-29T16:24:39.187088580Z" level=info msg="TearDown network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" successfully" Jan 29 16:24:39.187113 containerd[1498]: time="2025-01-29T16:24:39.187107381Z" level=info msg="StopPodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" returns successfully" Jan 29 16:24:39.187657 containerd[1498]: time="2025-01-29T16:24:39.187616398Z" level=info msg="RemovePodSandbox for \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" Jan 29 16:24:39.187657 containerd[1498]: time="2025-01-29T16:24:39.187656599Z" level=info msg="Forcibly stopping sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\"" Jan 29 16:24:39.187875 containerd[1498]: time="2025-01-29T16:24:39.187773363Z" level=info msg="TearDown network for sandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" successfully" Jan 29 16:24:39.191698 containerd[1498]: time="2025-01-29T16:24:39.191573367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 16:24:39.191698 containerd[1498]: time="2025-01-29T16:24:39.191655649Z" level=info msg="RemovePodSandbox \"0ec6603569738fe4181e46aeba276c0ee6d84fa58373d2ba305cf0d61041f0ea\" returns successfully" Jan 29 16:24:39.192888 containerd[1498]: time="2025-01-29T16:24:39.192491997Z" level=info msg="StopPodSandbox for \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\"" Jan 29 16:24:39.192888 containerd[1498]: time="2025-01-29T16:24:39.192611600Z" level=info msg="TearDown network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" successfully" Jan 29 16:24:39.192888 containerd[1498]: time="2025-01-29T16:24:39.192624361Z" level=info msg="StopPodSandbox for \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" returns successfully" Jan 29 16:24:39.193330 containerd[1498]: time="2025-01-29T16:24:39.193235901Z" level=info msg="RemovePodSandbox for \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\"" Jan 29 16:24:39.193330 containerd[1498]: time="2025-01-29T16:24:39.193268742Z" level=info msg="Forcibly stopping sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\"" Jan 29 16:24:39.193450 containerd[1498]: time="2025-01-29T16:24:39.193380466Z" level=info msg="TearDown network for sandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" successfully" Jan 29 16:24:39.197172 containerd[1498]: time="2025-01-29T16:24:39.197116867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 16:24:39.198499 containerd[1498]: time="2025-01-29T16:24:39.197200390Z" level=info msg="RemovePodSandbox \"2f690e0fdecc04e0c16c42e442ee2febda28d785f4237c77056c91372b069756\" returns successfully" Jan 29 16:25:04.054427 kubelet[3011]: I0129 16:25:04.053190 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:25:05.146534 kubelet[3011]: I0129 16:25:05.146255 3011 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 16:25:27.653010 systemd[1]: Started sshd@40-167.235.198.80:22-103.142.199.159:53218.service - OpenSSH per-connection server daemon (103.142.199.159:53218). Jan 29 16:25:28.508448 sshd[6061]: Invalid user oracle from 103.142.199.159 port 53218 Jan 29 16:25:28.671920 sshd[6061]: Received disconnect from 103.142.199.159 port 53218:11: Bye Bye [preauth] Jan 29 16:25:28.671920 sshd[6061]: Disconnected from invalid user oracle 103.142.199.159 port 53218 [preauth] Jan 29 16:25:28.674705 systemd[1]: sshd@40-167.235.198.80:22-103.142.199.159:53218.service: Deactivated successfully. Jan 29 16:25:40.941708 systemd[1]: Started sshd@41-167.235.198.80:22-134.122.8.241:52290.service - OpenSSH per-connection server daemon (134.122.8.241:52290). Jan 29 16:25:41.473237 sshd[6088]: Invalid user admin from 134.122.8.241 port 52290 Jan 29 16:25:41.572497 sshd[6088]: Received disconnect from 134.122.8.241 port 52290:11: Bye Bye [preauth] Jan 29 16:25:41.572497 sshd[6088]: Disconnected from invalid user admin 134.122.8.241 port 52290 [preauth] Jan 29 16:25:41.573515 systemd[1]: sshd@41-167.235.198.80:22-134.122.8.241:52290.service: Deactivated successfully. 
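The long run of containerd entries above is the kubelet's periodic sandbox garbage collection talking to containerd over the CRI: for each exited pod sandbox it issues StopPodSandbox (which tears down the sandbox network) and then RemovePodSandbox, falling back to "Forcibly stopping" when needed. The "not found ... Sending the event with nil podSandboxStatus" warnings appear because the sandbox metadata is already gone by the time containerd tries to attach a status to the removal event; they are harmless. The Go sketch below drives the same two CRI calls by hand for every sandbox already in the NOT_READY state. It is a minimal illustration, not code taken from this node; the socket path is the containerd default and may differ elsewhere.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket; adjust for your node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Only sandboxes that are no longer ready are candidates for removal.
	list, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{State: runtimeapi.PodSandboxState_SANDBOX_NOTREADY},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, sb := range list.Items {
		// Stop first (network teardown), then remove, mirroring the log above.
		if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			fmt.Println("stop:", err)
			continue
		}
		if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			fmt.Println("remove:", err)
			continue
		}
		fmt.Println("removed sandbox", sb.Id)
	}
}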
Jan 29 16:25:42.856944 update_engine[1477]: I20250129 16:25:42.855967 1477 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 29 16:25:42.856944 update_engine[1477]: I20250129 16:25:42.856037 1477 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 29 16:25:42.856944 update_engine[1477]: I20250129 16:25:42.856392 1477 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 29 16:25:42.858226 update_engine[1477]: I20250129 16:25:42.858101 1477 omaha_request_params.cc:62] Current group set to alpha Jan 29 16:25:42.859583 update_engine[1477]: I20250129 16:25:42.859287 1477 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 29 16:25:42.859583 update_engine[1477]: I20250129 16:25:42.859320 1477 update_attempter.cc:643] Scheduling an action processor start. Jan 29 16:25:42.859583 update_engine[1477]: I20250129 16:25:42.859365 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:25:42.860686 update_engine[1477]: I20250129 16:25:42.859773 1477 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 29 16:25:42.860686 update_engine[1477]: I20250129 16:25:42.859879 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:25:42.860686 update_engine[1477]: I20250129 16:25:42.859889 1477 omaha_request_action.cc:272] Request: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: Jan 29 16:25:42.860686 update_engine[1477]: I20250129 16:25:42.859896 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:25:42.864157 update_engine[1477]: I20250129 16:25:42.864106 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:25:42.864795 update_engine[1477]: I20250129 16:25:42.864763 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:25:42.865800 update_engine[1477]: E20250129 16:25:42.865761 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:25:42.866007 update_engine[1477]: I20250129 16:25:42.865983 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 29 16:25:42.868943 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 29 16:25:52.795965 update_engine[1477]: I20250129 16:25:52.795084 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:25:52.795965 update_engine[1477]: I20250129 16:25:52.795444 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:25:52.795965 update_engine[1477]: I20250129 16:25:52.795776 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 29 16:25:52.796739 update_engine[1477]: E20250129 16:25:52.796703 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:25:52.796934 update_engine[1477]: I20250129 16:25:52.796908 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 29 16:26:02.796961 update_engine[1477]: I20250129 16:26:02.796051 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:26:02.796961 update_engine[1477]: I20250129 16:26:02.796401 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:26:02.796961 update_engine[1477]: I20250129 16:26:02.796761 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:26:02.801028 update_engine[1477]: E20250129 16:26:02.797896 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:26:02.801028 update_engine[1477]: I20250129 16:26:02.797981 1477 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 29 16:26:07.618395 systemd[1]: Started sshd@42-167.235.198.80:22-149.50.252.131:35846.service - OpenSSH per-connection server daemon (149.50.252.131:35846). Jan 29 16:26:07.672319 systemd[1]: Started sshd@43-167.235.198.80:22-149.50.252.131:35852.service - OpenSSH per-connection server daemon (149.50.252.131:35852). Jan 29 16:26:07.824369 sshd[6166]: Connection closed by 149.50.252.131 port 35846 [preauth] Jan 29 16:26:07.825792 systemd[1]: sshd@42-167.235.198.80:22-149.50.252.131:35846.service: Deactivated successfully. Jan 29 16:26:07.850406 sshd[6169]: Connection closed by 149.50.252.131 port 35852 [preauth] Jan 29 16:26:07.851981 systemd[1]: sshd@43-167.235.198.80:22-149.50.252.131:35852.service: Deactivated successfully. Jan 29 16:26:12.101389 systemd[1]: run-containerd-runc-k8s.io-77c2618b2621200a4b3cfba4b8df37ddc13a7c5b90d865fb6e450d0d2101ad9f-runc.t5DHWT.mount: Deactivated successfully. Jan 29 16:26:12.791385 update_engine[1477]: I20250129 16:26:12.791045 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:26:12.791385 update_engine[1477]: I20250129 16:26:12.791319 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:26:12.794262 update_engine[1477]: I20250129 16:26:12.794150 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:26:12.795750 update_engine[1477]: E20250129 16:26:12.794533 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794592 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794602 1477 omaha_request_action.cc:617] Omaha request response: Jan 29 16:26:12.795750 update_engine[1477]: E20250129 16:26:12.794708 1477 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794729 1477 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794735 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794741 1477 update_attempter.cc:306] Processing Done. Jan 29 16:26:12.795750 update_engine[1477]: E20250129 16:26:12.794759 1477 update_attempter.cc:619] Update failed. 
Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794766 1477 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794772 1477 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794778 1477 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794878 1477 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794904 1477 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 29 16:26:12.795750 update_engine[1477]: I20250129 16:26:12.794910 1477 omaha_request_action.cc:272] Request: Jan 29 16:26:12.795750 update_engine[1477]: Jan 29 16:26:12.795750 update_engine[1477]: Jan 29 16:26:12.795750 update_engine[1477]: Jan 29 16:26:12.796259 update_engine[1477]: Jan 29 16:26:12.796259 update_engine[1477]: Jan 29 16:26:12.796259 update_engine[1477]: Jan 29 16:26:12.796259 update_engine[1477]: I20250129 16:26:12.794916 1477 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 29 16:26:12.796259 update_engine[1477]: I20250129 16:26:12.795893 1477 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 29 16:26:12.796259 update_engine[1477]: I20250129 16:26:12.796222 1477 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 29 16:26:12.796712 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 29 16:26:12.797065 update_engine[1477]: E20250129 16:26:12.796890 1477 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796947 1477 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796955 1477 omaha_request_action.cc:617] Omaha request response: Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796963 1477 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796968 1477 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796973 1477 update_attempter.cc:306] Processing Done. Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796980 1477 update_attempter.cc:310] Error event sent. Jan 29 16:26:12.797065 update_engine[1477]: I20250129 16:26:12.796991 1477 update_check_scheduler.cc:74] Next update check in 48m25s Jan 29 16:26:12.797689 locksmithd[1509]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 29 16:26:52.451291 systemd[1]: Started sshd@44-167.235.198.80:22-134.122.8.241:50642.service - OpenSSH per-connection server daemon (134.122.8.241:50642). 
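The update_engine failures above are expected noise on a machine whose automatic updates have been switched off: the Omaha endpoint is configured as the literal string "disabled" rather than a real URL ("Posting an Omaha request to disabled"), so every libcurl transfer fails at DNS resolution, the fetcher gives up after three retries, the result is converted to kActionCodeOmahaErrorInHTTPResponse, and the next check is simply rescheduled 48m25s later. The toy Go sketch below reproduces only this observable retry pattern; the real update_engine is a C++ daemon and this is not its implementation, and the endpoint string is kept as "disabled" purely to show the same resolution failure.

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkOmaha mimics the retry behaviour seen above: try the endpoint,
// report "retry N" on failure, and give up after a fixed number of attempts.
func checkOmaha(url string, attempts int, wait time.Duration) error {
	var lastErr error
	for n := 1; n <= attempts; n++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("no HTTP response, retry %d\n", n)
		time.Sleep(wait)
	}
	return fmt.Errorf("update check failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	// "disabled" is not a resolvable host, so this fails much like the log.
	if err := checkOmaha("https://disabled/update", 3, time.Second); err != nil {
		fmt.Println(err)
	}
}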
Jan 29 16:26:52.983776 sshd[6261]: Invalid user pandora from 134.122.8.241 port 50642 Jan 29 16:26:53.081309 sshd[6261]: Received disconnect from 134.122.8.241 port 50642:11: Bye Bye [preauth] Jan 29 16:26:53.081309 sshd[6261]: Disconnected from invalid user pandora 134.122.8.241 port 50642 [preauth] Jan 29 16:26:53.083465 systemd[1]: sshd@44-167.235.198.80:22-134.122.8.241:50642.service: Deactivated successfully. Jan 29 16:27:01.779512 systemd[1]: Started sshd@45-167.235.198.80:22-103.142.199.159:42268.service - OpenSSH per-connection server daemon (103.142.199.159:42268). Jan 29 16:27:02.637955 sshd[6289]: Invalid user xx from 103.142.199.159 port 42268 Jan 29 16:27:02.796139 sshd[6289]: Received disconnect from 103.142.199.159 port 42268:11: Bye Bye [preauth] Jan 29 16:27:02.796139 sshd[6289]: Disconnected from invalid user xx 103.142.199.159 port 42268 [preauth] Jan 29 16:27:02.800141 systemd[1]: sshd@45-167.235.198.80:22-103.142.199.159:42268.service: Deactivated successfully. Jan 29 16:27:24.705150 systemd[1]: run-containerd-runc-k8s.io-a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c-runc.GOmbmY.mount: Deactivated successfully. Jan 29 16:27:54.698497 systemd[1]: run-containerd-runc-k8s.io-a565aaf660a58e4314bff6f47da6ba28297caf860daa54c6d318b5401c053d0c-runc.VkDWEj.mount: Deactivated successfully. Jan 29 16:28:04.027431 systemd[1]: Started sshd@46-167.235.198.80:22-134.122.8.241:48996.service - OpenSSH per-connection server daemon (134.122.8.241:48996). Jan 29 16:28:04.653877 sshd[6429]: Received disconnect from 134.122.8.241 port 48996:11: Bye Bye [preauth] Jan 29 16:28:04.653877 sshd[6429]: Disconnected from authenticating user root 134.122.8.241 port 48996 [preauth] Jan 29 16:28:04.657410 systemd[1]: sshd@46-167.235.198.80:22-134.122.8.241:48996.service: Deactivated successfully. Jan 29 16:28:15.911343 systemd[1]: Started sshd@47-167.235.198.80:22-149.50.252.131:55648.service - OpenSSH per-connection server daemon (149.50.252.131:55648). Jan 29 16:28:15.941090 systemd[1]: Started sshd@48-167.235.198.80:22-149.50.252.131:55662.service - OpenSSH per-connection server daemon (149.50.252.131:55662). Jan 29 16:28:16.099783 sshd[6453]: Connection closed by 149.50.252.131 port 55648 [preauth] Jan 29 16:28:16.101773 systemd[1]: sshd@47-167.235.198.80:22-149.50.252.131:55648.service: Deactivated successfully. Jan 29 16:28:16.138162 sshd[6455]: Connection closed by 149.50.252.131 port 55662 [preauth] Jan 29 16:28:16.141052 systemd[1]: sshd@48-167.235.198.80:22-149.50.252.131:55662.service: Deactivated successfully. Jan 29 16:28:29.696471 systemd[1]: Started sshd@49-167.235.198.80:22-139.178.68.195:53826.service - OpenSSH per-connection server daemon (139.178.68.195:53826). Jan 29 16:28:30.691610 sshd[6489]: Accepted publickey for core from 139.178.68.195 port 53826 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:30.695798 sshd-session[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:30.708323 systemd-logind[1475]: New session 8 of user core. Jan 29 16:28:30.715076 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 16:28:31.496450 sshd[6491]: Connection closed by 139.178.68.195 port 53826 Jan 29 16:28:31.499589 sshd-session[6489]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:31.507078 systemd[1]: sshd@49-167.235.198.80:22-139.178.68.195:53826.service: Deactivated successfully. Jan 29 16:28:31.512800 systemd[1]: session-8.scope: Deactivated successfully. 
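Alongside the legitimate "core" sessions authenticated by public key, the sshd entries above and below record routine Internet background noise: a per-connection sshd@ service is started by systemd, a guessed user name (oracle, admin, pandora, xx) or root probe is rejected, and the peer disconnects preauth. When triaging such a log, a quick tally of offending source addresses is often the first step. The small Go filter below is a generic sketch, not anything shipped with this system; it counts "Invalid user" lines from a journal dump fed on standard input (for example journalctl -t sshd | go run tally.go).

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Count failed pre-auth probes per source address and guessed user name.
func main() {
	re := regexp.MustCompile(`Invalid user (\S+) from (\S+) port`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]+" as "+m[1]]++
		}
	}
	for key, n := range counts {
		fmt.Printf("%4d %s\n", n, key)
	}
}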
Jan 29 16:28:31.514123 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit. Jan 29 16:28:31.519523 systemd-logind[1475]: Removed session 8. Jan 29 16:28:35.798402 systemd[1]: Started sshd@50-167.235.198.80:22-103.142.199.159:47184.service - OpenSSH per-connection server daemon (103.142.199.159:47184). Jan 29 16:28:36.674953 systemd[1]: Started sshd@51-167.235.198.80:22-139.178.68.195:45926.service - OpenSSH per-connection server daemon (139.178.68.195:45926). Jan 29 16:28:36.702697 sshd[6523]: Received disconnect from 103.142.199.159 port 47184:11: Bye Bye [preauth] Jan 29 16:28:36.702697 sshd[6523]: Disconnected from authenticating user root 103.142.199.159 port 47184 [preauth] Jan 29 16:28:36.706362 systemd[1]: sshd@50-167.235.198.80:22-103.142.199.159:47184.service: Deactivated successfully. Jan 29 16:28:37.658136 sshd[6526]: Accepted publickey for core from 139.178.68.195 port 45926 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:37.660086 sshd-session[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:37.668159 systemd-logind[1475]: New session 9 of user core. Jan 29 16:28:37.677321 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 16:28:38.422876 sshd[6530]: Connection closed by 139.178.68.195 port 45926 Jan 29 16:28:38.423526 sshd-session[6526]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:38.429513 systemd[1]: sshd@51-167.235.198.80:22-139.178.68.195:45926.service: Deactivated successfully. Jan 29 16:28:38.434587 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 16:28:38.439260 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. Jan 29 16:28:38.442859 systemd-logind[1475]: Removed session 9. Jan 29 16:28:43.602516 systemd[1]: Started sshd@52-167.235.198.80:22-139.178.68.195:45934.service - OpenSSH per-connection server daemon (139.178.68.195:45934). Jan 29 16:28:44.588081 sshd[6564]: Accepted publickey for core from 139.178.68.195 port 45934 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:44.589642 sshd-session[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:44.597953 systemd-logind[1475]: New session 10 of user core. Jan 29 16:28:44.606175 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 16:28:45.364716 sshd[6566]: Connection closed by 139.178.68.195 port 45934 Jan 29 16:28:45.366261 sshd-session[6564]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:45.371009 systemd[1]: sshd@52-167.235.198.80:22-139.178.68.195:45934.service: Deactivated successfully. Jan 29 16:28:45.373880 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 16:28:45.376606 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. Jan 29 16:28:45.378290 systemd-logind[1475]: Removed session 10. Jan 29 16:28:45.542192 systemd[1]: Started sshd@53-167.235.198.80:22-139.178.68.195:58356.service - OpenSSH per-connection server daemon (139.178.68.195:58356). Jan 29 16:28:46.535852 sshd[6579]: Accepted publickey for core from 139.178.68.195 port 58356 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:46.537268 sshd-session[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:46.544875 systemd-logind[1475]: New session 11 of user core. Jan 29 16:28:46.550147 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 16:28:47.358801 sshd[6581]: Connection closed by 139.178.68.195 port 58356 Jan 29 16:28:47.359749 sshd-session[6579]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:47.365152 systemd[1]: sshd@53-167.235.198.80:22-139.178.68.195:58356.service: Deactivated successfully. Jan 29 16:28:47.368839 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 16:28:47.371441 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. Jan 29 16:28:47.373301 systemd-logind[1475]: Removed session 11. Jan 29 16:28:47.533322 systemd[1]: Started sshd@54-167.235.198.80:22-139.178.68.195:58370.service - OpenSSH per-connection server daemon (139.178.68.195:58370). Jan 29 16:28:48.510467 sshd[6591]: Accepted publickey for core from 139.178.68.195 port 58370 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:48.515237 sshd-session[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:48.525253 systemd-logind[1475]: New session 12 of user core. Jan 29 16:28:48.532179 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 16:28:49.280403 sshd[6593]: Connection closed by 139.178.68.195 port 58370 Jan 29 16:28:49.281055 sshd-session[6591]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:49.286993 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 16:28:49.289264 systemd[1]: sshd@54-167.235.198.80:22-139.178.68.195:58370.service: Deactivated successfully. Jan 29 16:28:49.293209 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. Jan 29 16:28:49.297558 systemd-logind[1475]: Removed session 12. Jan 29 16:28:54.456297 systemd[1]: Started sshd@55-167.235.198.80:22-139.178.68.195:58372.service - OpenSSH per-connection server daemon (139.178.68.195:58372). Jan 29 16:28:55.448767 sshd[6609]: Accepted publickey for core from 139.178.68.195 port 58372 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:55.452104 sshd-session[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:55.460531 systemd-logind[1475]: New session 13 of user core. Jan 29 16:28:55.465086 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 16:28:56.219967 sshd[6632]: Connection closed by 139.178.68.195 port 58372 Jan 29 16:28:56.221060 sshd-session[6609]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:56.227031 systemd[1]: sshd@55-167.235.198.80:22-139.178.68.195:58372.service: Deactivated successfully. Jan 29 16:28:56.231087 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 16:28:56.233287 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. Jan 29 16:28:56.235510 systemd-logind[1475]: Removed session 13. Jan 29 16:28:56.411578 systemd[1]: Started sshd@56-167.235.198.80:22-139.178.68.195:60542.service - OpenSSH per-connection server daemon (139.178.68.195:60542). Jan 29 16:28:57.420869 sshd[6643]: Accepted publickey for core from 139.178.68.195 port 60542 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:57.424333 sshd-session[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:57.432434 systemd-logind[1475]: New session 14 of user core. Jan 29 16:28:57.442609 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 16:28:58.351290 sshd[6647]: Connection closed by 139.178.68.195 port 60542 Jan 29 16:28:58.354456 sshd-session[6643]: pam_unix(sshd:session): session closed for user core Jan 29 16:28:58.362239 systemd[1]: sshd@56-167.235.198.80:22-139.178.68.195:60542.service: Deactivated successfully. Jan 29 16:28:58.366752 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 16:28:58.368896 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. Jan 29 16:28:58.371151 systemd-logind[1475]: Removed session 14. Jan 29 16:28:58.532341 systemd[1]: Started sshd@57-167.235.198.80:22-139.178.68.195:60558.service - OpenSSH per-connection server daemon (139.178.68.195:60558). Jan 29 16:28:59.517888 sshd[6658]: Accepted publickey for core from 139.178.68.195 port 60558 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:28:59.519361 sshd-session[6658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:28:59.526612 systemd-logind[1475]: New session 15 of user core. Jan 29 16:28:59.532114 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 16:29:02.497922 sshd[6660]: Connection closed by 139.178.68.195 port 60558 Jan 29 16:29:02.500280 sshd-session[6658]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:02.514489 systemd[1]: sshd@57-167.235.198.80:22-139.178.68.195:60558.service: Deactivated successfully. Jan 29 16:29:02.519663 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 16:29:02.520762 systemd[1]: session-15.scope: Consumed 626ms CPU time, 74.2M memory peak. Jan 29 16:29:02.525141 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. Jan 29 16:29:02.527284 systemd-logind[1475]: Removed session 15. Jan 29 16:29:02.683243 systemd[1]: Started sshd@58-167.235.198.80:22-139.178.68.195:60574.service - OpenSSH per-connection server daemon (139.178.68.195:60574). Jan 29 16:29:03.682319 sshd[6677]: Accepted publickey for core from 139.178.68.195 port 60574 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:29:03.684587 sshd-session[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:03.694669 systemd-logind[1475]: New session 16 of user core. Jan 29 16:29:03.703118 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 16:29:04.587956 sshd[6679]: Connection closed by 139.178.68.195 port 60574 Jan 29 16:29:04.588731 sshd-session[6677]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:04.594053 systemd[1]: sshd@58-167.235.198.80:22-139.178.68.195:60574.service: Deactivated successfully. Jan 29 16:29:04.598763 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 16:29:04.601141 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. Jan 29 16:29:04.603696 systemd-logind[1475]: Removed session 16. Jan 29 16:29:04.764254 systemd[1]: Started sshd@59-167.235.198.80:22-139.178.68.195:60586.service - OpenSSH per-connection server daemon (139.178.68.195:60586). Jan 29 16:29:05.758018 sshd[6700]: Accepted publickey for core from 139.178.68.195 port 60586 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:29:05.760885 sshd-session[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:05.767059 systemd-logind[1475]: New session 17 of user core. Jan 29 16:29:05.771068 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 16:29:06.530888 sshd[6702]: Connection closed by 139.178.68.195 port 60586 Jan 29 16:29:06.535467 sshd-session[6700]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:06.541043 systemd[1]: sshd@59-167.235.198.80:22-139.178.68.195:60586.service: Deactivated successfully. Jan 29 16:29:06.544085 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 16:29:06.546293 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. Jan 29 16:29:06.548111 systemd-logind[1475]: Removed session 17. Jan 29 16:29:11.712222 systemd[1]: Started sshd@60-167.235.198.80:22-139.178.68.195:49264.service - OpenSSH per-connection server daemon (139.178.68.195:49264). Jan 29 16:29:12.711892 sshd[6722]: Accepted publickey for core from 139.178.68.195 port 49264 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:29:12.718218 sshd-session[6722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:12.728526 systemd-logind[1475]: New session 18 of user core. Jan 29 16:29:12.732324 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 16:29:12.793069 systemd[1]: Started sshd@61-167.235.198.80:22-134.122.8.241:47344.service - OpenSSH per-connection server daemon (134.122.8.241:47344). Jan 29 16:29:13.319894 sshd[6745]: Invalid user xx from 134.122.8.241 port 47344 Jan 29 16:29:13.415803 sshd[6745]: Received disconnect from 134.122.8.241 port 47344:11: Bye Bye [preauth] Jan 29 16:29:13.415803 sshd[6745]: Disconnected from invalid user xx 134.122.8.241 port 47344 [preauth] Jan 29 16:29:13.417719 systemd[1]: sshd@61-167.235.198.80:22-134.122.8.241:47344.service: Deactivated successfully. Jan 29 16:29:13.489646 sshd[6743]: Connection closed by 139.178.68.195 port 49264 Jan 29 16:29:13.490441 sshd-session[6722]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:13.496568 systemd[1]: sshd@60-167.235.198.80:22-139.178.68.195:49264.service: Deactivated successfully. Jan 29 16:29:13.500031 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 16:29:13.501430 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. Jan 29 16:29:13.502706 systemd-logind[1475]: Removed session 18. Jan 29 16:29:18.666460 systemd[1]: Started sshd@62-167.235.198.80:22-139.178.68.195:34912.service - OpenSSH per-connection server daemon (139.178.68.195:34912). Jan 29 16:29:19.655059 sshd[6760]: Accepted publickey for core from 139.178.68.195 port 34912 ssh2: RSA SHA256:Hyj0s0Vt6PjOULEmcCMBJSketjS/5JrrtYaO1t9Nhfk Jan 29 16:29:19.657075 sshd-session[6760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 16:29:19.666188 systemd-logind[1475]: New session 19 of user core. Jan 29 16:29:19.672150 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 16:29:20.426717 sshd[6762]: Connection closed by 139.178.68.195 port 34912 Jan 29 16:29:20.426583 sshd-session[6760]: pam_unix(sshd:session): session closed for user core Jan 29 16:29:20.434461 systemd[1]: sshd@62-167.235.198.80:22-139.178.68.195:34912.service: Deactivated successfully. Jan 29 16:29:20.437329 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 16:29:20.441200 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. Jan 29 16:29:20.443800 systemd-logind[1475]: Removed session 19. Jan 29 16:29:35.517798 systemd[1]: cri-containerd-1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec.scope: Deactivated successfully. 
Jan 29 16:29:35.520805 systemd[1]: cri-containerd-1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec.scope: Consumed 6.572s CPU time, 64.5M memory peak, 3.5M read from disk. Jan 29 16:29:35.551884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec-rootfs.mount: Deactivated successfully. Jan 29 16:29:35.552234 containerd[1498]: time="2025-01-29T16:29:35.552162873Z" level=info msg="shim disconnected" id=1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec namespace=k8s.io Jan 29 16:29:35.552234 containerd[1498]: time="2025-01-29T16:29:35.552222075Z" level=warning msg="cleaning up after shim disconnected" id=1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec namespace=k8s.io Jan 29 16:29:35.552234 containerd[1498]: time="2025-01-29T16:29:35.552230355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:35.804965 kubelet[3011]: I0129 16:29:35.804167 3011 scope.go:117] "RemoveContainer" containerID="1f3864bd043d8065319611e0a2661ee8ff645a481b363c5977421a08ec5d29ec" Jan 29 16:29:35.810752 containerd[1498]: time="2025-01-29T16:29:35.810608012Z" level=info msg="CreateContainer within sandbox \"08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 29 16:29:35.829671 containerd[1498]: time="2025-01-29T16:29:35.829602581Z" level=info msg="CreateContainer within sandbox \"08732233af9dacedbdb85ad5f549581f3cef3562528dae96f62ef0a983d10449\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0bfc5099470a544b3327fbbe33047fbf454ab0aeeda210074327caeb1f623f4f\"" Jan 29 16:29:35.830244 containerd[1498]: time="2025-01-29T16:29:35.830218639Z" level=info msg="StartContainer for \"0bfc5099470a544b3327fbbe33047fbf454ab0aeeda210074327caeb1f623f4f\"" Jan 29 16:29:35.867290 systemd[1]: Started cri-containerd-0bfc5099470a544b3327fbbe33047fbf454ab0aeeda210074327caeb1f623f4f.scope - libcontainer container 0bfc5099470a544b3327fbbe33047fbf454ab0aeeda210074327caeb1f623f4f. Jan 29 16:29:35.900789 kubelet[3011]: E0129 16:29:35.900721 3011 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51698->10.0.0.2:2379: read: connection timed out" Jan 29 16:29:35.908380 systemd[1]: cri-containerd-a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277.scope: Deactivated successfully. Jan 29 16:29:35.908950 systemd[1]: cri-containerd-a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277.scope: Consumed 3.031s CPU time, 23.1M memory peak, 2.2M read from disk. 
Jan 29 16:29:35.937601 containerd[1498]: time="2025-01-29T16:29:35.937550373Z" level=info msg="StartContainer for \"0bfc5099470a544b3327fbbe33047fbf454ab0aeeda210074327caeb1f623f4f\" returns successfully" Jan 29 16:29:35.947030 containerd[1498]: time="2025-01-29T16:29:35.946868852Z" level=info msg="shim disconnected" id=a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277 namespace=k8s.io Jan 29 16:29:35.947030 containerd[1498]: time="2025-01-29T16:29:35.946952935Z" level=warning msg="cleaning up after shim disconnected" id=a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277 namespace=k8s.io Jan 29 16:29:35.947030 containerd[1498]: time="2025-01-29T16:29:35.946962535Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:36.553977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277-rootfs.mount: Deactivated successfully. Jan 29 16:29:36.799575 kubelet[3011]: I0129 16:29:36.797749 3011 scope.go:117] "RemoveContainer" containerID="a025af42bdb683a44c633cea66e6a1bdd7e7b8b47d8242e7a84750ddbb36b277" Jan 29 16:29:36.802673 containerd[1498]: time="2025-01-29T16:29:36.802453920Z" level=info msg="CreateContainer within sandbox \"20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 29 16:29:36.828926 containerd[1498]: time="2025-01-29T16:29:36.826656885Z" level=info msg="CreateContainer within sandbox \"20ee849a4cd440f97b06077ca301f79821a5e1002421745209d8bed915b62cc2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8\"" Jan 29 16:29:36.828926 containerd[1498]: time="2025-01-29T16:29:36.827264464Z" level=info msg="StartContainer for \"b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8\"" Jan 29 16:29:36.889712 systemd[1]: Started cri-containerd-b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8.scope - libcontainer container b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8. Jan 29 16:29:36.892857 systemd[1]: cri-containerd-54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053.scope: Deactivated successfully. Jan 29 16:29:36.893217 systemd[1]: cri-containerd-54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053.scope: Consumed 7.071s CPU time, 46.1M memory peak. Jan 29 16:29:36.943957 containerd[1498]: time="2025-01-29T16:29:36.943653470Z" level=info msg="shim disconnected" id=54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053 namespace=k8s.io Jan 29 16:29:36.943957 containerd[1498]: time="2025-01-29T16:29:36.943732472Z" level=warning msg="cleaning up after shim disconnected" id=54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053 namespace=k8s.io Jan 29 16:29:36.943957 containerd[1498]: time="2025-01-29T16:29:36.943741073Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:36.992411 containerd[1498]: time="2025-01-29T16:29:36.992077841Z" level=info msg="StartContainer for \"b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8\" returns successfully" Jan 29 16:29:37.553573 systemd[1]: run-containerd-runc-k8s.io-b929200bf494a5e689b037a4d3eb158babb5928e8e1bdae68e7e50e6923aefd8-runc.UPKdZS.mount: Deactivated successfully. 
Jan 29 16:29:37.553985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053-rootfs.mount: Deactivated successfully. Jan 29 16:29:37.811711 kubelet[3011]: I0129 16:29:37.811483 3011 scope.go:117] "RemoveContainer" containerID="54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053" Jan 29 16:29:37.817540 containerd[1498]: time="2025-01-29T16:29:37.816907438Z" level=info msg="CreateContainer within sandbox \"7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 29 16:29:37.840538 containerd[1498]: time="2025-01-29T16:29:37.836856396Z" level=info msg="CreateContainer within sandbox \"7455dfb3385e8ff19cff9a4e0a120c519d2351ff4d7a847240bc3edd22008ac6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a\"" Jan 29 16:29:37.840538 containerd[1498]: time="2025-01-29T16:29:37.839226027Z" level=info msg="StartContainer for \"a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a\"" Jan 29 16:29:37.896045 systemd[1]: Started cri-containerd-a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a.scope - libcontainer container a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a. Jan 29 16:29:38.169538 containerd[1498]: time="2025-01-29T16:29:38.168748343Z" level=info msg="StartContainer for \"a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a\" returns successfully" Jan 29 16:29:39.931469 kubelet[3011]: E0129 16:29:39.930681 3011 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51472->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4230-0-0-e-139a7b6c18.181f36bdae6c49d2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4230-0-0-e-139a7b6c18,UID:65a5389adc32590f2f0bf8e7a7caad99,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4230-0-0-e-139a7b6c18,},FirstTimestamp:2025-01-29 16:29:29.491352018 +0000 UTC m=+350.700777089,LastTimestamp:2025-01-29 16:29:29.491352018 +0000 UTC m=+350.700777089,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-0-e-139a7b6c18,}" Jan 29 16:29:40.259753 systemd[1]: cri-containerd-a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a.scope: Deactivated successfully. Jan 29 16:29:40.284670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a-rootfs.mount: Deactivated successfully. 
Jan 29 16:29:40.292514 containerd[1498]: time="2025-01-29T16:29:40.292337783Z" level=info msg="shim disconnected" id=a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a namespace=k8s.io Jan 29 16:29:40.292514 containerd[1498]: time="2025-01-29T16:29:40.292473547Z" level=warning msg="cleaning up after shim disconnected" id=a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a namespace=k8s.io Jan 29 16:29:40.292514 containerd[1498]: time="2025-01-29T16:29:40.292483627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 16:29:40.825743 kubelet[3011]: I0129 16:29:40.825689 3011 scope.go:117] "RemoveContainer" containerID="54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053" Jan 29 16:29:40.826082 kubelet[3011]: I0129 16:29:40.826040 3011 scope.go:117] "RemoveContainer" containerID="a3bbe1233c318c3ad0a5b2ba5f4f4fd88955de85635107c2a069d04267cfbc2a" Jan 29 16:29:40.827065 kubelet[3011]: E0129 16:29:40.826293 3011 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7bc55997bb-vt7fd_tigera-operator(6b039cb0-80b5-44ff-80ac-d0f97ff75f32)\"" pod="tigera-operator/tigera-operator-7bc55997bb-vt7fd" podUID="6b039cb0-80b5-44ff-80ac-d0f97ff75f32" Jan 29 16:29:40.831201 containerd[1498]: time="2025-01-29T16:29:40.830263521Z" level=info msg="RemoveContainer for \"54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053\"" Jan 29 16:29:40.838140 containerd[1498]: time="2025-01-29T16:29:40.837124127Z" level=info msg="RemoveContainer for \"54f034850d4d9b69cb54654f76796a33a56ca6ea458f7e2c835febcdcc2ff053\" returns successfully"
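This final block shows the node recovering from a brief etcd/apiserver stall ("Failed to update lease ... read: connection timed out", plus the rejected Unhealthy event for kube-apiserver's readiness probe): the shims for kube-controller-manager, kube-scheduler and tigera-operator disconnect, their scopes and rootfs mounts are cleaned up, and the kubelet removes each dead container and creates Attempt:1 replacements inside the existing sandboxes. tigera-operator then fails again and enters CrashLoopBackOff with "back-off 10s restarting failed container". The sketch below models the usual kubelet crash-loop schedule, assuming the upstream defaults of a 10-second initial delay doubling up to a five-minute cap; the exact values in effect on this node are not visible in the log.

package main

import (
	"fmt"
	"time"
)

// Rough model of kubelet's CrashLoopBackOff delays: 10s initial, doubled per
// restart, capped at 5m. These are assumed upstream defaults, not values read
// from this node.
func main() {
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}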