Feb 13 20:01:34.879305 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 20:01:34.879356 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Feb 13 18:13:29 -00 2025
Feb 13 20:01:34.879381 kernel: KASLR enabled
Feb 13 20:01:34.879396 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Feb 13 20:01:34.879412 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x138595418 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Feb 13 20:01:34.879427 kernel: random: crng init done
Feb 13 20:01:34.879445 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:01:34.879460 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Feb 13 20:01:34.879476 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Feb 13 20:01:34.879495 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879511 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879526 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879542 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879557 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879577 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879626 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879644 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879661 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:01:34.879678 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 20:01:34.879694 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Feb 13 20:01:34.879711 kernel: NUMA: Failed to initialise from firmware
Feb 13 20:01:34.879728 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 20:01:34.879744 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Feb 13 20:01:34.879761 kernel: Zone ranges:
Feb 13 20:01:34.879777 kernel:   DMA    [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 20:01:34.879796 kernel:   DMA32  empty
Feb 13 20:01:34.879813 kernel:   Normal [mem 0x0000000100000000-0x0000000139ffffff]
Feb 13 20:01:34.879829 kernel: Movable zone start for each node
Feb 13 20:01:34.879846 kernel: Early memory node ranges
Feb 13 20:01:34.879863 kernel:   node 0: [mem 0x0000000040000000-0x000000013676ffff]
Feb 13 20:01:34.879879 kernel:   node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Feb 13 20:01:34.879896 kernel:   node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Feb 13 20:01:34.879912 kernel:   node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Feb 13 20:01:34.879929 kernel:   node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Feb 13 20:01:34.879945 kernel:   node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Feb 13 20:01:34.879962 kernel:   node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Feb 13 20:01:34.879979 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Feb 13 20:01:34.879999 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Feb 13 20:01:34.880016 kernel: psci: probing for conduit method from ACPI.
Feb 13 20:01:34.880066 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 20:01:34.880109 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 20:01:34.880127 kernel: psci: Trusted OS migration not required
Feb 13 20:01:34.880145 kernel: psci: SMC Calling Convention v1.1
Feb 13 20:01:34.880167 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 20:01:34.880185 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 20:01:34.880204 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 20:01:34.880222 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 13 20:01:34.880240 kernel: Detected PIPT I-cache on CPU0
Feb 13 20:01:34.880258 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 20:01:34.880275 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 20:01:34.880293 kernel: CPU features: detected: Spectre-v4
Feb 13 20:01:34.880310 kernel: CPU features: detected: Spectre-BHB
Feb 13 20:01:34.880328 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 20:01:34.880349 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 20:01:34.880367 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 20:01:34.880384 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 20:01:34.880402 kernel: alternatives: applying boot alternatives
Feb 13 20:01:34.880423 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7
Feb 13 20:01:34.880442 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:01:34.880460 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:01:34.880478 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:01:34.880495 kernel: Fallback order for Node 0: 0
Feb 13 20:01:34.880513 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1008000
Feb 13 20:01:34.880531 kernel: Policy zone: Normal
Feb 13 20:01:34.880552 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:01:34.880569 kernel: software IO TLB: area num 2.
Feb 13 20:01:34.880587 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Feb 13 20:01:34.880606 kernel: Memory: 3882936K/4096000K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 213064K reserved, 0K cma-reserved)
Feb 13 20:01:34.880625 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:01:34.880642 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:01:34.880661 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:01:34.880679 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:01:34.880697 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:01:34.880715 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:01:34.880733 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:01:34.880754 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:01:34.880772 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 20:01:34.880789 kernel: GICv3: 256 SPIs implemented
Feb 13 20:01:34.880807 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 20:01:34.880824 kernel: Root IRQ handler: gic_handle_irq
Feb 13 20:01:34.880842 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 20:01:34.880860 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 20:01:34.880877 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 20:01:34.880895 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 20:01:34.880914 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 20:01:34.880931 kernel: GICv3: using LPI property table @0x00000001000e0000
Feb 13 20:01:34.880949 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Feb 13 20:01:34.880970 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:01:34.880988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:01:34.881006 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 20:01:34.883087 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 20:01:34.883130 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 20:01:34.883139 kernel: Console: colour dummy device 80x25
Feb 13 20:01:34.883147 kernel: ACPI: Core revision 20230628
Feb 13 20:01:34.883155 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 20:01:34.883163 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:01:34.883172 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:01:34.883190 kernel: landlock: Up and running.
Feb 13 20:01:34.883199 kernel: SELinux:  Initializing.
Feb 13 20:01:34.883207 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:01:34.883215 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 20:01:34.883223 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:01:34.883234 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:01:34.883242 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:01:34.883252 kernel: rcu: 	Max phase no-delay instances is 400.
Feb 13 20:01:34.883260 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 20:01:34.883269 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 20:01:34.883277 kernel: Remapping and enabling EFI services.
Feb 13 20:01:34.883285 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:01:34.883293 kernel: Detected PIPT I-cache on CPU1
Feb 13 20:01:34.883300 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 20:01:34.883308 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Feb 13 20:01:34.883316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 20:01:34.883324 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 20:01:34.883332 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:01:34.883339 kernel: SMP: Total of 2 processors activated.
Feb 13 20:01:34.883349 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 20:01:34.883357 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 20:01:34.883371 kernel: CPU features: detected: Common not Private translations
Feb 13 20:01:34.883381 kernel: CPU features: detected: CRC32 instructions
Feb 13 20:01:34.883389 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 20:01:34.883397 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 20:01:34.883405 kernel: CPU features: detected: LSE atomic instructions
Feb 13 20:01:34.883413 kernel: CPU features: detected: Privileged Access Never
Feb 13 20:01:34.883421 kernel: CPU features: detected: RAS Extension Support
Feb 13 20:01:34.883431 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 20:01:34.883439 kernel: CPU: All CPU(s) started at EL1
Feb 13 20:01:34.883448 kernel: alternatives: applying system-wide alternatives
Feb 13 20:01:34.883456 kernel: devtmpfs: initialized
Feb 13 20:01:34.883464 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:01:34.883472 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:01:34.883480 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:01:34.883490 kernel: SMBIOS 3.0.0 present.
Feb 13 20:01:34.883498 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Feb 13 20:01:34.883506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:01:34.883514 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 20:01:34.883523 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 20:01:34.883531 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 20:01:34.883539 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:01:34.883548 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Feb 13 20:01:34.883556 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:01:34.883566 kernel: cpuidle: using governor menu
Feb 13 20:01:34.883574 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 20:01:34.883583 kernel: ASID allocator initialised with 32768 entries
Feb 13 20:01:34.883591 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:01:34.883599 kernel: Serial: AMBA PL011 UART driver
Feb 13 20:01:34.883607 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 20:01:34.883616 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 20:01:34.883624 kernel: Modules: 509040 pages in range for PLT usage
Feb 13 20:01:34.883632 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:01:34.883641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:01:34.883649 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 20:01:34.883658 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 20:01:34.883666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:01:34.883674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:01:34.883682 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 20:01:34.883691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 20:01:34.883699 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:01:34.883707 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:01:34.883717 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:01:34.883725 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:01:34.883734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:01:34.883742 kernel: ACPI: Interpreter enabled
Feb 13 20:01:34.883750 kernel: ACPI: Using GIC for interrupt routing
Feb 13 20:01:34.883759 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 20:01:34.883767 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 20:01:34.883775 kernel: printk: console [ttyAMA0] enabled
Feb 13 20:01:34.883783 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:01:34.883956 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:01:34.884065 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:01:34.884213 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:01:34.884304 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 20:01:34.884378 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 20:01:34.884389 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 20:01:34.884399 kernel: PCI host bridge to bus 0000:00
Feb 13 20:01:34.884480 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 20:01:34.884556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 20:01:34.884622 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 20:01:34.884691 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:01:34.884781 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 20:01:34.884865 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Feb 13 20:01:34.884942 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Feb 13 20:01:34.885021 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 20:01:34.887313 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.887390 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Feb 13 20:01:34.887470 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.887539 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Feb 13 20:01:34.887614 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.887690 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Feb 13 20:01:34.887765 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.887834 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Feb 13 20:01:34.887909 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.887977 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Feb 13 20:01:34.890158 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.890258 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Feb 13 20:01:34.890333 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.890398 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Feb 13 20:01:34.890471 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.890537 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Feb 13 20:01:34.890607 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:01:34.890676 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Feb 13 20:01:34.890754 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Feb 13 20:01:34.890820 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Feb 13 20:01:34.890895 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 20:01:34.890965 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Feb 13 20:01:34.891048 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:01:34.891139 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 20:01:34.891223 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 20:01:34.891291 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Feb 13 20:01:34.891366 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Feb 13 20:01:34.891434 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Feb 13 20:01:34.891502 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Feb 13 20:01:34.891579 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Feb 13 20:01:34.891654 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Feb 13 20:01:34.891729 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 20:01:34.891799 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Feb 13 20:01:34.891868 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Feb 13 20:01:34.891943 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Feb 13 20:01:34.892011 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Feb 13 20:01:34.892143 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 20:01:34.892221 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Feb 13 20:01:34.892288 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Feb 13 20:01:34.892356 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Feb 13 20:01:34.892423 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Feb 13 20:01:34.892491 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Feb 13 20:01:34.892558 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Feb 13 20:01:34.892627 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Feb 13 20:01:34.892694 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Feb 13 20:01:34.892759 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Feb 13 20:01:34.892828 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Feb 13 20:01:34.892897 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 20:01:34.892963 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Feb 13 20:01:34.894505 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Feb 13 20:01:34.894630 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 20:01:34.894702 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Feb 13 20:01:34.894768 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Feb 13 20:01:34.894839 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 20:01:34.894908 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Feb 13 20:01:34.894973 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Feb 13 20:01:34.895114 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 20:01:34.895193 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Feb 13 20:01:34.895265 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Feb 13 20:01:34.895333 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 20:01:34.895399 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Feb 13 20:01:34.895465 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Feb 13 20:01:34.895535 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 20:01:34.895600 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Feb 13 20:01:34.895665 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Feb 13 20:01:34.895738 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 20:01:34.895803 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Feb 13 20:01:34.895868 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Feb 13 20:01:34.895934 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Feb 13 20:01:34.896001 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:01:34.898011 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Feb 13 20:01:34.898172 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:01:34.898255 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Feb 13 20:01:34.898320 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:01:34.898387 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Feb 13 20:01:34.898471 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:01:34.898542 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Feb 13 20:01:34.898607 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:01:34.898674 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Feb 13 20:01:34.898741 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:01:34.898809 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Feb 13 20:01:34.898874 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:01:34.898941 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Feb 13 20:01:34.899006 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:01:34.900246 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Feb 13 20:01:34.900328 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:01:34.900406 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Feb 13 20:01:34.900471 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Feb 13 20:01:34.900538 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Feb 13 20:01:34.900602 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 20:01:34.900671 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Feb 13 20:01:34.900738 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 20:01:34.900816 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Feb 13 20:01:34.900885 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 20:01:34.900955 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Feb 13 20:01:34.901020 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 20:01:34.901920 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Feb 13 20:01:34.901989 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 20:01:34.902109 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Feb 13 20:01:34.902181 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 20:01:34.902249 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Feb 13 20:01:34.902315 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 20:01:34.902389 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Feb 13 20:01:34.902454 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 20:01:34.902521 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Feb 13 20:01:34.902585 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Feb 13 20:01:34.902655 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Feb 13 20:01:34.902729 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Feb 13 20:01:34.902797 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 20:01:34.902867 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Feb 13 20:01:34.902934 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Feb 13 20:01:34.902999 kernel: pci 0000:00:02.0:   bridge window [io 0x1000-0x1fff]
Feb 13 20:01:34.904174 kernel: pci 0000:00:02.0:   bridge window [mem 0x10000000-0x101fffff]
Feb 13 20:01:34.904257 kernel: pci 0000:00:02.0:   bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:01:34.904333 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Feb 13 20:01:34.904409 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Feb 13 20:01:34.904474 kernel: pci 0000:00:02.1:   bridge window [io 0x2000-0x2fff]
Feb 13 20:01:34.904537 kernel: pci 0000:00:02.1:   bridge window [mem 0x10200000-0x103fffff]
Feb 13 20:01:34.904601 kernel: pci 0000:00:02.1:   bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:01:34.904675 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Feb 13 20:01:34.904744 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Feb 13 20:01:34.904811 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Feb 13 20:01:34.904879 kernel: pci 0000:00:02.2:   bridge window [io 0x3000-0x3fff]
Feb 13 20:01:34.904943 kernel: pci 0000:00:02.2:   bridge window [mem 0x10400000-0x105fffff]
Feb 13 20:01:34.905006 kernel: pci 0000:00:02.2:   bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:01:34.906411 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Feb 13 20:01:34.906495 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Feb 13 20:01:34.906560 kernel: pci 0000:00:02.3:   bridge window [io 0x4000-0x4fff]
Feb 13 20:01:34.906625 kernel: pci 0000:00:02.3:   bridge window [mem 0x10600000-0x107fffff]
Feb 13 20:01:34.906690 kernel: pci 0000:00:02.3:   bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:01:34.906770 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Feb 13 20:01:34.906839 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Feb 13 20:01:34.906906 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Feb 13 20:01:34.906972 kernel: pci 0000:00:02.4:   bridge window [io 0x5000-0x5fff]
Feb 13 20:01:34.908147 kernel: pci 0000:00:02.4:   bridge window [mem 0x10800000-0x109fffff]
Feb 13 20:01:34.908247 kernel: pci 0000:00:02.4:   bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:01:34.908321 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Feb 13 20:01:34.908390 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Feb 13 20:01:34.908465 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Feb 13 20:01:34.908531 kernel: pci 0000:00:02.5:   bridge window [io 0x6000-0x6fff]
Feb 13 20:01:34.908596 kernel: pci 0000:00:02.5:   bridge window [mem 0x10a00000-0x10bfffff]
Feb 13 20:01:34.908661 kernel: pci 0000:00:02.5:   bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:01:34.908735 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Feb 13 20:01:34.908803 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Feb 13 20:01:34.908870 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Feb 13 20:01:34.908940 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Feb 13 20:01:34.909005 kernel: pci 0000:00:02.6:   bridge window [io 0x7000-0x7fff]
Feb 13 20:01:34.910298 kernel: pci 0000:00:02.6:   bridge window [mem 0x10c00000-0x10dfffff]
Feb 13 20:01:34.910397 kernel: pci 0000:00:02.6:   bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:01:34.910483 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Feb 13 20:01:34.910561 kernel: pci 0000:00:02.7:   bridge window [io 0x8000-0x8fff]
Feb 13 20:01:34.910630 kernel: pci 0000:00:02.7:   bridge window [mem 0x10e00000-0x10ffffff]
Feb 13 20:01:34.910693 kernel: pci 0000:00:02.7:   bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:01:34.911565 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Feb 13 20:01:34.911647 kernel: pci 0000:00:03.0:   bridge window [io 0x9000-0x9fff]
Feb 13 20:01:34.911712 kernel: pci 0000:00:03.0:   bridge window [mem 0x11000000-0x111fffff]
Feb 13 20:01:34.911776 kernel: pci 0000:00:03.0:   bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:01:34.911842 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 20:01:34.911899 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 20:01:34.911956 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 20:01:34.912037 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 20:01:34.912125 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Feb 13 20:01:34.912189 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Feb 13 20:01:34.912256 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Feb 13 20:01:34.912316 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Feb 13 20:01:34.912385 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Feb 13 20:01:34.912457 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Feb 13 20:01:34.912520 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Feb 13 20:01:34.912582 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Feb 13 20:01:34.912662 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Feb 13 20:01:34.912724 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Feb 13 20:01:34.912784 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Feb 13 20:01:34.912851 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Feb 13 20:01:34.912912 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Feb 13 20:01:34.912975 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Feb 13 20:01:34.914652 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Feb 13 20:01:34.914741 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Feb 13 20:01:34.914809 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Feb 13 20:01:34.914889 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Feb 13 20:01:34.914952 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Feb 13 20:01:34.915016 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Feb 13 20:01:34.916215 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Feb 13 20:01:34.916304 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Feb 13 20:01:34.916387 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Feb 13 20:01:34.916464 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Feb 13 20:01:34.916538 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Feb 13 20:01:34.916605 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Feb 13 20:01:34.916616 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 20:01:34.916625 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 20:01:34.916633 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 20:01:34.916642 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 20:01:34.916650 kernel: iommu: Default domain type: Translated
Feb 13 20:01:34.916659 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 20:01:34.916669 kernel: efivars: Registered efivars operations
Feb 13 20:01:34.916678 kernel: vgaarb: loaded
Feb 13 20:01:34.916686 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 20:01:34.916694 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:01:34.916703 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:01:34.916711 kernel: pnp: PnP ACPI init
Feb 13 20:01:34.916787 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 20:01:34.916799 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 20:01:34.916810 kernel: NET: Registered PF_INET protocol family
Feb 13 20:01:34.916818 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:01:34.916827 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 20:01:34.916835 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:01:34.916844 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:01:34.916852 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 20:01:34.916861 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 20:01:34.916869 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:01:34.916877 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 20:01:34.916887 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:01:34.916967 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Feb 13 20:01:34.916979 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:01:34.916988 kernel: kvm [1]: HYP mode not available
Feb 13 20:01:34.916996 kernel: Initialise system trusted keyrings
Feb 13 20:01:34.917004 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 20:01:34.917012 kernel: Key type asymmetric registered
Feb 13 20:01:34.917021 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:01:34.917048 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 20:01:34.917059 kernel: io scheduler mq-deadline registered
Feb 13 20:01:34.917101 kernel: io scheduler kyber registered
Feb 13 20:01:34.917113 kernel: io scheduler bfq registered
Feb 13 20:01:34.917122 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 20:01:34.917218 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Feb 13 20:01:34.917295 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Feb 13 20:01:34.917367 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Feb 13 20:01:34.917441 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Feb 13 20:01:34.917515 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Feb 13 20:01:34.917584 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ Feb 13 20:01:34.917656 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Feb 13 20:01:34.917726 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Feb 13 20:01:34.917796 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.917868 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Feb 13 20:01:34.917940 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Feb 13 20:01:34.918012 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.918143 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Feb 13 20:01:34.918227 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Feb 13 20:01:34.918297 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.918370 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Feb 13 20:01:34.918442 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Feb 13 20:01:34.918510 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.918581 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Feb 13 20:01:34.918650 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Feb 13 20:01:34.918718 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.918789 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Feb 13 20:01:34.918861 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Feb 13 20:01:34.918930 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 
20:01:34.918942 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Feb 13 20:01:34.919012 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Feb 13 20:01:34.919419 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Feb 13 20:01:34.919500 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:01:34.919517 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 20:01:34.919526 kernel: ACPI: button: Power Button [PWRB] Feb 13 20:01:34.919535 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 20:01:34.919613 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Feb 13 20:01:34.919691 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Feb 13 20:01:34.919703 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:01:34.919712 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Feb 13 20:01:34.919782 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Feb 13 20:01:34.919797 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Feb 13 20:01:34.919806 kernel: thunder_xcv, ver 1.0 Feb 13 20:01:34.919814 kernel: thunder_bgx, ver 1.0 Feb 13 20:01:34.919822 kernel: nicpf, ver 1.0 Feb 13 20:01:34.919831 kernel: nicvf, ver 1.0 Feb 13 20:01:34.919912 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 20:01:34.919980 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T20:01:34 UTC (1739476894) Feb 13 20:01:34.919991 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 20:01:34.920002 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 20:01:34.920011 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 20:01:34.920019 kernel: watchdog: Hard watchdog permanently disabled Feb 13 20:01:34.920127 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:01:34.920138 kernel: Segment 
Routing with IPv6 Feb 13 20:01:34.920161 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:01:34.920170 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:01:34.920178 kernel: Key type dns_resolver registered Feb 13 20:01:34.920187 kernel: registered taskstats version 1 Feb 13 20:01:34.920200 kernel: Loading compiled-in X.509 certificates Feb 13 20:01:34.920209 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 8bd805622262697b24b0fa7c407ae82c4289ceec' Feb 13 20:01:34.920218 kernel: Key type .fscrypt registered Feb 13 20:01:34.920226 kernel: Key type fscrypt-provisioning registered Feb 13 20:01:34.920235 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 20:01:34.920243 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:01:34.920252 kernel: ima: No architecture policies found Feb 13 20:01:34.920261 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 20:01:34.920269 kernel: clk: Disabling unused clocks Feb 13 20:01:34.920280 kernel: Freeing unused kernel memory: 39360K Feb 13 20:01:34.922091 kernel: Run /init as init process Feb 13 20:01:34.922121 kernel: with arguments: Feb 13 20:01:34.922130 kernel: /init Feb 13 20:01:34.922138 kernel: with environment: Feb 13 20:01:34.922146 kernel: HOME=/ Feb 13 20:01:34.922154 kernel: TERM=linux Feb 13 20:01:34.922162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:01:34.922172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:01:34.922190 systemd[1]: Detected virtualization kvm. Feb 13 20:01:34.922198 systemd[1]: Detected architecture arm64. Feb 13 20:01:34.922207 systemd[1]: Running in initrd. 
Feb 13 20:01:34.922214 systemd[1]: No hostname configured, using default hostname. Feb 13 20:01:34.922222 systemd[1]: Hostname set to . Feb 13 20:01:34.922231 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:01:34.922239 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:01:34.922249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:01:34.922258 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:01:34.922269 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:01:34.922278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:01:34.922286 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:01:34.922295 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:01:34.922305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:01:34.922315 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:01:34.922323 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:01:34.922332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:01:34.922340 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:01:34.922348 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:01:34.922357 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:01:34.922365 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:01:34.922374 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Feb 13 20:01:34.922385 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:01:34.922393 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 20:01:34.922402 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 20:01:34.922410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:01:34.922419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 20:01:34.922427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:01:34.922435 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:01:34.922444 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 20:01:34.922452 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 20:01:34.922462 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 20:01:34.922471 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 20:01:34.922480 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 20:01:34.922488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 20:01:34.922496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:01:34.922506 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 20:01:34.922550 systemd-journald[236]: Collecting audit messages is disabled. Feb 13 20:01:34.922573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:01:34.922582 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 20:01:34.922591 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:01:34.922600 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Feb 13 20:01:34.922609 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:01:34.922618 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 20:01:34.922626 kernel: Bridge firewalling registered Feb 13 20:01:34.922634 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:01:34.922642 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 20:01:34.922652 systemd-journald[236]: Journal started Feb 13 20:01:34.922672 systemd-journald[236]: Runtime Journal (/run/log/journal/259b724ddb5c487a9f2363ae07432029) is 8.0M, max 76.6M, 68.6M free. Feb 13 20:01:34.896504 systemd-modules-load[237]: Inserted module 'overlay' Feb 13 20:01:34.924185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:01:34.914910 systemd-modules-load[237]: Inserted module 'br_netfilter' Feb 13 20:01:34.932634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:01:34.936047 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 20:01:34.938588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:01:34.942061 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:01:34.945001 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:01:34.951278 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 20:01:34.953740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Feb 13 20:01:34.968180 dracut-cmdline[270]: dracut-dracut-053 Feb 13 20:01:34.971555 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c15c751c06cfb933aa98417326b93d899c08a83ce060a940cd01082629c201a7 Feb 13 20:01:34.972083 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:01:34.980267 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:01:35.004307 systemd-resolved[287]: Positive Trust Anchors: Feb 13 20:01:35.004325 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:01:35.004362 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:01:35.009316 systemd-resolved[287]: Defaulting to hostname 'linux'. Feb 13 20:01:35.011108 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:01:35.011741 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:01:35.054056 kernel: SCSI subsystem initialized Feb 13 20:01:35.060045 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 20:01:35.066056 kernel: iscsi: registered transport (tcp) Feb 13 20:01:35.080200 kernel: iscsi: registered transport (qla4xxx) Feb 13 20:01:35.080265 kernel: QLogic iSCSI HBA Driver Feb 13 20:01:35.130486 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 20:01:35.139289 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 20:01:35.158391 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 20:01:35.158467 kernel: device-mapper: uevent: version 1.0.3 Feb 13 20:01:35.159054 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 20:01:35.209102 kernel: raid6: neonx8 gen() 15662 MB/s Feb 13 20:01:35.226118 kernel: raid6: neonx4 gen() 15590 MB/s Feb 13 20:01:35.243086 kernel: raid6: neonx2 gen() 13152 MB/s Feb 13 20:01:35.260093 kernel: raid6: neonx1 gen() 10435 MB/s Feb 13 20:01:35.277080 kernel: raid6: int64x8 gen() 6921 MB/s Feb 13 20:01:35.294096 kernel: raid6: int64x4 gen() 7294 MB/s Feb 13 20:01:35.311127 kernel: raid6: int64x2 gen() 6096 MB/s Feb 13 20:01:35.328097 kernel: raid6: int64x1 gen() 5031 MB/s Feb 13 20:01:35.328179 kernel: raid6: using algorithm neonx8 gen() 15662 MB/s Feb 13 20:01:35.345122 kernel: raid6: .... xor() 11865 MB/s, rmw enabled Feb 13 20:01:35.345206 kernel: raid6: using neon recovery algorithm Feb 13 20:01:35.350230 kernel: xor: measuring software checksum speed Feb 13 20:01:35.350293 kernel: 8regs : 19778 MB/sec Feb 13 20:01:35.350308 kernel: 32regs : 19679 MB/sec Feb 13 20:01:35.350321 kernel: arm64_neon : 27087 MB/sec Feb 13 20:01:35.351076 kernel: xor: using function: arm64_neon (27087 MB/sec) Feb 13 20:01:35.401146 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 20:01:35.416238 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 20:01:35.423249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:01:35.450353 systemd-udevd[457]: Using default interface naming scheme 'v255'. Feb 13 20:01:35.453843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:01:35.462234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 20:01:35.475444 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Feb 13 20:01:35.512001 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:01:35.520279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 20:01:35.580894 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:01:35.588597 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 20:01:35.607133 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 20:01:35.607805 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:01:35.609212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:01:35.610284 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:01:35.615240 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 20:01:35.639256 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 20:01:35.676767 kernel: scsi host0: Virtio SCSI HBA Feb 13 20:01:35.681326 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 20:01:35.681423 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Feb 13 20:01:35.713255 kernel: ACPI: bus type USB registered Feb 13 20:01:35.713316 kernel: usbcore: registered new interface driver usbfs Feb 13 20:01:35.713335 kernel: usbcore: registered new interface driver hub Feb 13 20:01:35.713345 kernel: usbcore: registered new device driver usb Feb 13 20:01:35.716372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:01:35.716490 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:01:35.718168 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 20:01:35.719012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:01:35.719219 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:01:35.722704 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:01:35.736291 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:01:35.739955 kernel: sr 0:0:0:0: Power-on or device reset occurred Feb 13 20:01:35.748642 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Feb 13 20:01:35.748801 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 20:01:35.748822 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Feb 13 20:01:35.748154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:01:35.754261 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 20:01:35.759746 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:01:35.767239 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Feb 13 20:01:35.767359 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 20:01:35.767450 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Feb 13 20:01:35.767535 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Feb 13 20:01:35.767615 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Feb 13 20:01:35.767695 kernel: sd 0:0:0:1: Power-on or device reset occurred Feb 13 20:01:35.772195 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Feb 13 20:01:35.772315 kernel: sd 0:0:0:1: [sda] Write Protect is off Feb 13 20:01:35.772396 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Feb 13 20:01:35.772484 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 20:01:35.772566 kernel: hub 1-0:1.0: USB hub found Feb 13 20:01:35.772660 kernel: hub 1-0:1.0: 4 ports detected Feb 13 20:01:35.772738 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 20:01:35.772831 kernel: hub 2-0:1.0: USB hub found Feb 13 20:01:35.772923 kernel: hub 2-0:1.0: 4 ports detected Feb 13 20:01:35.773005 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 20:01:35.773016 kernel: GPT:17805311 != 80003071 Feb 13 20:01:35.775084 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 20:01:35.775114 kernel: GPT:17805311 != 80003071 Feb 13 20:01:35.775124 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 20:01:35.775134 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:01:35.775143 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Feb 13 20:01:35.787164 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 20:01:35.813058 kernel: BTRFS: device fsid 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (510) Feb 13 20:01:35.825611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (513) Feb 13 20:01:35.824188 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Feb 13 20:01:35.841937 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Feb 13 20:01:35.846218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Feb 13 20:01:35.846834 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Feb 13 20:01:35.854553 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Feb 13 20:01:35.864297 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 20:01:35.872628 disk-uuid[573]: Primary Header is updated. Feb 13 20:01:35.872628 disk-uuid[573]: Secondary Entries is updated. Feb 13 20:01:35.872628 disk-uuid[573]: Secondary Header is updated. 
Feb 13 20:01:35.880139 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:01:35.886076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:01:36.008079 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 20:01:36.251179 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Feb 13 20:01:36.386095 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Feb 13 20:01:36.386147 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Feb 13 20:01:36.387167 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Feb 13 20:01:36.442085 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Feb 13 20:01:36.442424 kernel: usbcore: registered new interface driver usbhid Feb 13 20:01:36.443272 kernel: usbhid: USB HID core driver Feb 13 20:01:36.894103 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 20:01:36.895829 disk-uuid[574]: The operation has completed successfully. Feb 13 20:01:36.945166 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 20:01:36.945281 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 20:01:36.957391 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 20:01:36.962781 sh[588]: Success Feb 13 20:01:36.977084 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 20:01:37.039359 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 20:01:37.041337 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 20:01:37.050218 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Feb 13 20:01:37.064217 kernel: BTRFS info (device dm-0): first mount of filesystem 4bb2b262-8ef2-48e3-80f4-24f9d7a85bf6 Feb 13 20:01:37.064308 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:01:37.064346 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 20:01:37.064390 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 20:01:37.065175 kernel: BTRFS info (device dm-0): using free space tree Feb 13 20:01:37.071072 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 20:01:37.073347 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 20:01:37.075449 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 20:01:37.091450 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 20:01:37.097340 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 20:01:37.111582 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:01:37.111638 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:01:37.111658 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:01:37.115211 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:01:37.115270 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:01:37.126632 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 20:01:37.128107 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:01:37.133897 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 20:01:37.140332 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 20:01:37.218861 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:01:37.229313 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:01:37.235914 ignition[682]: Ignition 2.19.0 Feb 13 20:01:37.236552 ignition[682]: Stage: fetch-offline Feb 13 20:01:37.236605 ignition[682]: no configs at "/usr/lib/ignition/base.d" Feb 13 20:01:37.236613 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:01:37.236784 ignition[682]: parsed url from cmdline: "" Feb 13 20:01:37.236787 ignition[682]: no config URL provided Feb 13 20:01:37.236791 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 20:01:37.240379 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:01:37.236798 ignition[682]: no config at "/usr/lib/ignition/user.ign" Feb 13 20:01:37.236803 ignition[682]: failed to fetch config: resource requires networking Feb 13 20:01:37.237579 ignition[682]: Ignition finished successfully Feb 13 20:01:37.254370 systemd-networkd[774]: lo: Link UP Feb 13 20:01:37.254383 systemd-networkd[774]: lo: Gained carrier Feb 13 20:01:37.256358 systemd-networkd[774]: Enumeration completed Feb 13 20:01:37.256577 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:01:37.257887 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:01:37.257891 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:01:37.259840 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:01:37.259843 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 20:01:37.260866 systemd[1]: Reached target network.target - Network.
Feb 13 20:01:37.262147 systemd-networkd[774]: eth0: Link UP
Feb 13 20:01:37.262150 systemd-networkd[774]: eth0: Gained carrier
Feb 13 20:01:37.262159 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:37.266308 systemd-networkd[774]: eth1: Link UP
Feb 13 20:01:37.266311 systemd-networkd[774]: eth1: Gained carrier
Feb 13 20:01:37.266320 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:37.268203 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:01:37.282699 ignition[777]: Ignition 2.19.0
Feb 13 20:01:37.282711 ignition[777]: Stage: fetch
Feb 13 20:01:37.282902 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:01:37.282913 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 20:01:37.283008 ignition[777]: parsed url from cmdline: ""
Feb 13 20:01:37.283011 ignition[777]: no config URL provided
Feb 13 20:01:37.283016 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:01:37.283044 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:01:37.283110 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Feb 13 20:01:37.283776 ignition[777]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Feb 13 20:01:37.294136 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:01:37.341147 systemd-networkd[774]: eth0: DHCPv4 address 49.13.3.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 20:01:37.484365 ignition[777]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Feb 13 20:01:37.488368 ignition[777]: GET result: OK
Feb 13 20:01:37.488506 ignition[777]: parsing config with SHA512: f2eebff02438e355eed13248cedbaa2d013a8fa4ed95bc92bc050dba4b5b0894a6122875c19b05482009d3c1f9ef11476e88562a595122bfd4e524f73940c8fc
Feb 13 20:01:37.493282 unknown[777]: fetched base config from "system"
Feb 13 20:01:37.493303 unknown[777]: fetched base config from "system"
Feb 13 20:01:37.493964 ignition[777]: fetch: fetch complete
Feb 13 20:01:37.493309 unknown[777]: fetched user config from "hetzner"
Feb 13 20:01:37.493971 ignition[777]: fetch: fetch passed
Feb 13 20:01:37.495703 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:01:37.494067 ignition[777]: Ignition finished successfully
Feb 13 20:01:37.504298 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:01:37.519068 ignition[784]: Ignition 2.19.0
Feb 13 20:01:37.519081 ignition[784]: Stage: kargs
Feb 13 20:01:37.519311 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:01:37.519323 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 20:01:37.520569 ignition[784]: kargs: kargs passed
Feb 13 20:01:37.523743 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:01:37.520635 ignition[784]: Ignition finished successfully
Feb 13 20:01:37.531294 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:01:37.544195 ignition[791]: Ignition 2.19.0
Feb 13 20:01:37.544206 ignition[791]: Stage: disks
Feb 13 20:01:37.544398 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:01:37.544408 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 20:01:37.547622 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:01:37.545467 ignition[791]: disks: disks passed
Feb 13 20:01:37.548790 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:01:37.545524 ignition[791]: Ignition finished successfully Feb 13 20:01:37.549593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 20:01:37.550723 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 20:01:37.551622 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 20:01:37.552841 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:01:37.560264 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 20:01:37.576720 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 20:01:37.581501 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 20:01:37.587345 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 20:01:37.628079 kernel: EXT4-fs (sda9): mounted filesystem 9957d679-c6c4-49f4-b1b2-c3c1f3ba5699 r/w with ordered data mode. Quota mode: none. Feb 13 20:01:37.629970 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:01:37.631966 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:01:37.640192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:01:37.643190 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:01:37.647278 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Feb 13 20:01:37.650274 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:01:37.651977 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:01:37.658539 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Feb 13 20:01:37.662401 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (807) Feb 13 20:01:37.664704 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:01:37.664763 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:01:37.665666 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:01:37.665283 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 20:01:37.677760 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:01:37.677818 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:01:37.682840 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:01:37.708122 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:01:37.709808 coreos-metadata[809]: Feb 13 20:01:37.709 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Feb 13 20:01:37.711925 coreos-metadata[809]: Feb 13 20:01:37.711 INFO Fetch successful Feb 13 20:01:37.712568 coreos-metadata[809]: Feb 13 20:01:37.712 INFO wrote hostname ci-4081-3-1-1-94e317dfd2 to /sysroot/etc/hostname Feb 13 20:01:37.714711 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:01:37.718224 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:01:37.723429 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:01:37.728840 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:01:37.831611 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:01:37.837194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:01:37.840262 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Feb 13 20:01:37.852063 kernel: BTRFS info (device sda6): last unmount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:01:37.885040 ignition[924]: INFO : Ignition 2.19.0 Feb 13 20:01:37.885040 ignition[924]: INFO : Stage: mount Feb 13 20:01:37.887099 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:01:37.887099 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:01:37.887099 ignition[924]: INFO : mount: mount passed Feb 13 20:01:37.887213 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:01:37.889616 ignition[924]: INFO : Ignition finished successfully Feb 13 20:01:37.892110 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:01:37.899308 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:01:38.064334 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:01:38.069477 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:01:38.094109 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (937) Feb 13 20:01:38.095870 kernel: BTRFS info (device sda6): first mount of filesystem 896fb6d3-4143-43a6-a44b-ca1ce10817e1 Feb 13 20:01:38.095906 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 20:01:38.095917 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:01:38.100117 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:01:38.100178 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:01:38.104436 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 20:01:38.134669 ignition[954]: INFO : Ignition 2.19.0
Feb 13 20:01:38.134669 ignition[954]: INFO : Stage: files
Feb 13 20:01:38.135701 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:01:38.135701 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Feb 13 20:01:38.137210 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:01:38.137210 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:01:38.137210 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:01:38.140294 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:01:38.141300 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:01:38.142370 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:01:38.141688 unknown[954]: wrote ssh authorized keys file for user: core
Feb 13 20:01:38.144088 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:01:38.144088 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 20:01:38.260414 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:01:38.594380 systemd-networkd[774]: eth0: Gained IPv6LL
Feb 13 20:01:39.042593 systemd-networkd[774]: eth1: Gained IPv6LL
Feb 13 20:01:39.335703 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 20:01:39.335703 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:01:39.335703 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 20:01:40.007529 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:01:40.308825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:01:40.320100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 20:01:40.932058 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:01:42.019368 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 20:01:42.019368 ignition[954]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:01:42.021855 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:01:42.033664 ignition[954]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:01:42.033664 ignition[954]: INFO : files: files passed
Feb 13 20:01:42.033664 ignition[954]: INFO : Ignition finished successfully
Feb 13 20:01:42.026332 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:01:42.034247 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:01:42.037737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:01:42.041961 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:01:42.042489 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:01:42.057161 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:01:42.057161 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:01:42.060550 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 20:01:42.063662 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:01:42.064863 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 20:01:42.072306 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 20:01:42.106371 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 20:01:42.106494 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 20:01:42.108686 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 20:01:42.111047 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 20:01:42.112454 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 20:01:42.117217 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 20:01:42.135108 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:01:42.143254 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 20:01:42.158241 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:01:42.159740 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:01:42.160551 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 20:01:42.161560 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 13 20:01:42.161688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 20:01:42.163255 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 20:01:42.163860 systemd[1]: Stopped target basic.target - Basic System. Feb 13 20:01:42.165276 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 20:01:42.166552 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:01:42.167625 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 20:01:42.168698 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:01:42.169747 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:01:42.170968 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:01:42.171975 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:01:42.173112 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:01:42.174208 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:01:42.174340 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:01:42.175737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:01:42.176458 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:01:42.178082 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:01:42.178159 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:01:42.179242 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:01:42.179359 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:01:42.180760 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Feb 13 20:01:42.180872 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:01:42.182233 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:01:42.182357 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:01:42.183292 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 13 20:01:42.183392 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Feb 13 20:01:42.188289 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:01:42.189259 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:01:42.189399 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:01:42.193252 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 20:01:42.193749 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:01:42.193861 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:01:42.194608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:01:42.194699 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:01:42.206058 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:01:42.206189 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:01:42.213108 ignition[1006]: INFO : Ignition 2.19.0 Feb 13 20:01:42.213108 ignition[1006]: INFO : Stage: umount Feb 13 20:01:42.216907 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:01:42.216907 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Feb 13 20:01:42.219964 ignition[1006]: INFO : umount: umount passed Feb 13 20:01:42.219964 ignition[1006]: INFO : Ignition finished successfully Feb 13 20:01:42.218665 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Feb 13 20:01:42.220853 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:01:42.221125 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:01:42.222832 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:01:42.222936 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:01:42.224488 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:01:42.224546 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:01:42.225465 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:01:42.225512 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 20:01:42.226374 systemd[1]: Stopped target network.target - Network. Feb 13 20:01:42.227193 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:01:42.227254 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:01:42.228230 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:01:42.229147 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:01:42.234152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:01:42.236716 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:01:42.237263 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:01:42.238563 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:01:42.238617 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:01:42.239895 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:01:42.239930 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:01:42.241590 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:01:42.241660 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Feb 13 20:01:42.242893 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:01:42.242964 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:01:42.244241 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:01:42.245324 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:01:42.246938 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:01:42.247101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:01:42.248797 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:01:42.248903 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:01:42.250623 systemd-networkd[774]: eth0: DHCPv6 lease lost Feb 13 20:01:42.254108 systemd-networkd[774]: eth1: DHCPv6 lease lost Feb 13 20:01:42.255968 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:01:42.256792 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:01:42.258595 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:01:42.258836 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:01:42.265223 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:01:42.266941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:01:42.267011 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:01:42.268090 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:01:42.271784 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 20:01:42.271912 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:01:42.280554 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 20:01:42.280675 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:01:42.282204 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:01:42.282269 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:01:42.283673 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:01:42.283725 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:01:42.285182 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:01:42.285336 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:01:42.286737 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:01:42.286778 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:01:42.287516 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:01:42.287549 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:01:42.288568 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:01:42.288623 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:01:42.290288 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:01:42.290334 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:01:42.291887 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:01:42.291944 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:01:42.297271 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:01:42.298283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:01:42.298355 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 20:01:42.300306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:01:42.300381 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:01:42.302520 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:01:42.303525 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:01:42.319301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 20:01:42.319433 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:01:42.321904 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:01:42.329396 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:01:42.339540 systemd[1]: Switching root. Feb 13 20:01:42.374277 systemd-journald[236]: Journal stopped Feb 13 20:01:43.254322 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Feb 13 20:01:43.254396 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 20:01:43.254408 kernel: SELinux: policy capability open_perms=1 Feb 13 20:01:43.254421 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 20:01:43.254431 kernel: SELinux: policy capability always_check_network=0 Feb 13 20:01:43.254440 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 20:01:43.254453 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 20:01:43.254471 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 20:01:43.254482 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 20:01:43.254493 kernel: audit: type=1403 audit(1739476902.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 20:01:43.254504 systemd[1]: Successfully loaded SELinux policy in 35.967ms. Feb 13 20:01:43.254527 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.892ms. 
Feb 13 20:01:43.254541 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:01:43.254556 systemd[1]: Detected virtualization kvm. Feb 13 20:01:43.254567 systemd[1]: Detected architecture arm64. Feb 13 20:01:43.254577 systemd[1]: Detected first boot. Feb 13 20:01:43.254588 systemd[1]: Hostname set to . Feb 13 20:01:43.254598 systemd[1]: Initializing machine ID from VM UUID. Feb 13 20:01:43.254609 zram_generator::config[1049]: No configuration found. Feb 13 20:01:43.254620 systemd[1]: Populated /etc with preset unit settings. Feb 13 20:01:43.254632 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 20:01:43.254643 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 20:01:43.254653 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 20:01:43.254666 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 20:01:43.254676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 20:01:43.254687 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 20:01:43.254697 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 20:01:43.254707 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 20:01:43.254719 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 20:01:43.254729 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 20:01:43.254740 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 13 20:01:43.254750 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:01:43.254761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:01:43.254771 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 20:01:43.254781 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 20:01:43.254795 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 20:01:43.254806 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:01:43.254818 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 20:01:43.254828 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:01:43.254838 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 20:01:43.254849 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 20:01:43.254859 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 20:01:43.254870 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 20:01:43.254885 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 20:01:43.254896 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 20:01:43.254907 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:01:43.254917 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:01:43.254927 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 20:01:43.254938 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 20:01:43.254948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 20:01:43.254959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:01:43.254970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:01:43.254981 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:01:43.254993 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:01:43.255003 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:01:43.257197 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:01:43.257221 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:01:43.257232 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:01:43.257243 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:01:43.257260 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:01:43.257273 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:01:43.257284 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:01:43.257294 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:01:43.257305 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:01:43.257315 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:01:43.257325 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:01:43.257338 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:01:43.257348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:01:43.257359 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:01:43.257369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:01:43.257380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:01:43.257391 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:01:43.257403 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:01:43.257416 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:01:43.257430 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:01:43.257443 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:01:43.257454 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:01:43.257465 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:01:43.257475 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:01:43.257485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:01:43.257496 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:01:43.257506 systemd[1]: Stopped verity-setup.service.
Feb 13 20:01:43.257518 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:01:43.257528 kernel: loop: module loaded
Feb 13 20:01:43.257540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:01:43.257550 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:01:43.257560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:01:43.257570 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:01:43.257581 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:01:43.257593 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:01:43.257603 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:01:43.257614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:01:43.257624 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:01:43.257635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:01:43.257648 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:01:43.257658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:01:43.257669 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:01:43.257681 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:01:43.257695 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:01:43.257706 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:01:43.257716 kernel: fuse: init (API version 7.39)
Feb 13 20:01:43.257726 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:01:43.257736 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:01:43.257780 systemd-journald[1111]: Collecting audit messages is disabled.
Feb 13 20:01:43.257804 kernel: ACPI: bus type drm_connector registered
Feb 13 20:01:43.257814 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:01:43.257825 systemd-journald[1111]: Journal started
Feb 13 20:01:43.257846 systemd-journald[1111]: Runtime Journal (/run/log/journal/259b724ddb5c487a9f2363ae07432029) is 8.0M, max 76.6M, 68.6M free.
Feb 13 20:01:42.991587 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:01:43.262224 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:01:43.262248 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:01:43.262260 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:01:43.010764 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:01:43.011495 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:01:43.267071 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:01:43.270243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:01:43.272081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:01:43.279210 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:01:43.281068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:01:43.290609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:01:43.290670 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:01:43.298080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:01:43.302165 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:01:43.306272 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:01:43.309177 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:01:43.309395 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:01:43.310798 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:01:43.312125 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:01:43.312917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:01:43.315433 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:01:43.352918 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:01:43.362941 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:01:43.367247 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:01:43.379161 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:01:43.383254 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:01:43.402223 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:01:43.404073 kernel: loop0: detected capacity change from 0 to 114328
Feb 13 20:01:43.407070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:01:43.415678 systemd-journald[1111]: Time spent on flushing to /var/log/journal/259b724ddb5c487a9f2363ae07432029 is 32.524ms for 1133 entries.
Feb 13 20:01:43.415678 systemd-journald[1111]: System Journal (/var/log/journal/259b724ddb5c487a9f2363ae07432029) is 8.0M, max 584.8M, 576.8M free.
Feb 13 20:01:43.461382 systemd-journald[1111]: Received client request to flush runtime journal.
Feb 13 20:01:43.414809 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:01:43.419477 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:01:43.436547 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:01:43.439976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:01:43.442160 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:01:43.447983 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:01:43.467203 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:01:43.467847 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:01:43.485981 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 20:01:43.496339 kernel: loop1: detected capacity change from 0 to 8
Feb 13 20:01:43.519060 kernel: loop2: detected capacity change from 0 to 114432
Feb 13 20:01:43.532602 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:01:43.546429 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:01:43.561067 kernel: loop3: detected capacity change from 0 to 189592
Feb 13 20:01:43.584291 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Feb 13 20:01:43.584318 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Feb 13 20:01:43.589787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:01:43.611099 kernel: loop4: detected capacity change from 0 to 114328
Feb 13 20:01:43.636063 kernel: loop5: detected capacity change from 0 to 8
Feb 13 20:01:43.640514 kernel: loop6: detected capacity change from 0 to 114432
Feb 13 20:01:43.660055 kernel: loop7: detected capacity change from 0 to 189592
Feb 13 20:01:43.682978 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Feb 13 20:01:43.684403 (sd-merge)[1190]: Merged extensions into '/usr'.
Feb 13 20:01:43.693390 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:01:43.693506 systemd[1]: Reloading...
Feb 13 20:01:43.814048 zram_generator::config[1216]: No configuration found.
Feb 13 20:01:43.849071 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:01:43.948252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:01:43.995481 systemd[1]: Reloading finished in 301 ms.
Feb 13 20:01:44.016800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:01:44.018277 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:01:44.029706 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:01:44.034375 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:01:44.036457 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:01:44.042246 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:01:44.047265 systemd[1]: Reloading requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:01:44.047292 systemd[1]: Reloading...
Feb 13 20:01:44.056484 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:01:44.056966 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:01:44.057741 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:01:44.057959 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Feb 13 20:01:44.058018 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Feb 13 20:01:44.062205 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:01:44.062362 systemd-tmpfiles[1254]: Skipping /boot
Feb 13 20:01:44.071449 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:01:44.071581 systemd-tmpfiles[1254]: Skipping /boot
Feb 13 20:01:44.100257 systemd-udevd[1256]: Using default interface naming scheme 'v255'.
Feb 13 20:01:44.141904 zram_generator::config[1288]: No configuration found.
Feb 13 20:01:44.298969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:01:44.334064 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:01:44.363070 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1302)
Feb 13 20:01:44.379266 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 20:01:44.379626 systemd[1]: Reloading finished in 332 ms.
Feb 13 20:01:44.396290 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:01:44.411103 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:01:44.452325 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:01:44.470970 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Feb 13 20:01:44.485051 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Feb 13 20:01:44.485123 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 13 20:01:44.485136 kernel: [drm] features: -context_init
Feb 13 20:01:44.486065 kernel: [drm] number of scanouts: 1
Feb 13 20:01:44.486260 kernel: [drm] number of cap sets: 0
Feb 13 20:01:44.488346 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:01:44.492270 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Feb 13 20:01:44.493250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:01:44.497376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:01:44.500176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:01:44.503320 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 20:01:44.511523 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 13 20:01:44.518358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:01:44.523088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:01:44.532278 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:01:44.533863 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:01:44.536247 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:01:44.547244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:01:44.554292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:01:44.559195 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:01:44.564240 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:01:44.565949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:01:44.567363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:01:44.568575 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:01:44.570372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:01:44.571945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:01:44.572806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:01:44.573781 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:01:44.573899 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:01:44.581925 augenrules[1389]: No rules
Feb 13 20:01:44.583960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:01:44.595034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Feb 13 20:01:44.598175 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:01:44.608615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:01:44.609603 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:01:44.609822 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:01:44.617297 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:01:44.622316 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:01:44.627336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:01:44.633060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:01:44.649675 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:01:44.656442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:01:44.657925 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:01:44.665364 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:01:44.676237 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:01:44.677657 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:01:44.688432 lvm[1409]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:01:44.701223 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:01:44.709684 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:01:44.715304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:01:44.731242 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:01:44.742047 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:01:44.768248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:01:44.778038 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:01:44.791187 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:01:44.792011 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:01:44.793143 systemd-networkd[1380]: lo: Link UP
Feb 13 20:01:44.794129 systemd-networkd[1380]: lo: Gained carrier
Feb 13 20:01:44.795736 systemd-timesyncd[1386]: No network connectivity, watching for changes.
Feb 13 20:01:44.795750 systemd-networkd[1380]: Enumeration completed
Feb 13 20:01:44.795815 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:01:44.799369 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:44.799460 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:01:44.800403 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:44.801133 systemd-networkd[1380]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:01:44.801839 systemd-networkd[1380]: eth0: Link UP
Feb 13 20:01:44.803094 systemd-networkd[1380]: eth0: Gained carrier
Feb 13 20:01:44.803119 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:44.804230 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:01:44.806536 systemd-networkd[1380]: eth1: Link UP
Feb 13 20:01:44.806544 systemd-networkd[1380]: eth1: Gained carrier
Feb 13 20:01:44.806563 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:01:44.820767 systemd-resolved[1384]: Positive Trust Anchors:
Feb 13 20:01:44.820792 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:01:44.820824 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:01:44.825437 systemd-resolved[1384]: Using system hostname 'ci-4081-3-1-1-94e317dfd2'.
Feb 13 20:01:44.827343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:01:44.828541 systemd[1]: Reached target network.target - Network.
Feb 13 20:01:44.829263 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:01:44.830175 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:01:44.831072 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:01:44.831981 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:01:44.833106 systemd-networkd[1380]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 20:01:44.833151 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:01:44.833904 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Feb 13 20:01:44.834383 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:01:44.835449 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:01:44.836134 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:01:44.836166 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:01:44.836610 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:01:44.838235 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:01:44.840207 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:01:44.849604 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:01:44.851285 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:01:44.852121 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:01:44.852663 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:01:44.853276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:01:44.853315 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:01:44.854761 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:01:44.857219 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 20:01:44.859223 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:01:44.862205 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:01:44.862286 systemd-networkd[1380]: eth0: DHCPv4 address 49.13.3.212/32, gateway 172.31.1.1 acquired from 172.31.1.1
Feb 13 20:01:44.866358 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:01:44.866919 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:01:44.868596 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Feb 13 20:01:44.874210 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:01:44.877597 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 20:01:44.882244 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Feb 13 20:01:44.887254 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:01:44.892247 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:01:44.898097 jq[1434]: false
Feb 13 20:01:44.902282 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 20:01:44.903670 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 20:01:44.904243 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 20:01:44.905206 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 20:01:44.909258 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 20:01:44.920551 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 20:01:44.920751 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 20:01:44.923750 extend-filesystems[1435]: Found loop4
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found loop5
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found loop6
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found loop7
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda1
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda2
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda3
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found usr
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda4
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda6
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda7
Feb 13 20:01:44.925364 extend-filesystems[1435]: Found sda9
Feb 13 20:01:44.925364 extend-filesystems[1435]: Checking size of /dev/sda9
Feb 13 20:01:44.947215 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 20:01:44.947913 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 20:01:44.959565 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 20:01:44.959750 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 20:01:44.964447 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 20:01:44.969513 dbus-daemon[1433]: [system] SELinux support is enabled
Feb 13 20:01:44.969706 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 20:01:44.973559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 20:01:44.973595 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 20:01:44.975154 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 20:01:44.975181 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 20:01:44.978074 extend-filesystems[1435]: Resized partition /dev/sda9
Feb 13 20:01:44.984795 jq[1447]: true
Feb 13 20:01:44.989057 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024)
Feb 13 20:01:45.001384 tar[1451]: linux-arm64/helm
Feb 13 20:01:45.007324 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Feb 13 20:01:45.012166 jq[1474]: true
Feb 13 20:01:45.022281 coreos-metadata[1432]: Feb 13 20:01:45.022 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Feb 13 20:01:45.027597 coreos-metadata[1432]: Feb 13 20:01:45.027 INFO Fetch successful
Feb 13 20:01:45.027597 coreos-metadata[1432]: Feb 13 20:01:45.027 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Feb 13 20:01:45.028274 coreos-metadata[1432]: Feb 13 20:01:45.028 INFO Fetch successful
Feb 13 20:01:45.055723 update_engine[1446]: I20250213 20:01:45.055004 1446 main.cc:92] Flatcar Update Engine starting
Feb 13 20:01:45.072491 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 20:01:45.073520 update_engine[1446]: I20250213 20:01:45.073277 1446 update_check_scheduler.cc:74] Next update check in 9m16s
Feb 13 20:01:45.075309 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 20:01:45.130803 systemd-logind[1443]: New seat seat0.
Feb 13 20:01:45.135493 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 20:01:45.135523 systemd-logind[1443]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Feb 13 20:01:45.135745 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 20:01:45.176367 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:01:45.180598 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:01:45.184122 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:01:45.186097 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:01:45.193475 systemd[1]: Starting sshkeys.service... Feb 13 20:01:45.210129 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1299) Feb 13 20:01:45.210198 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Feb 13 20:01:45.223517 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:01:45.235704 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:01:45.245045 extend-filesystems[1473]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:01:45.245045 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 5 Feb 13 20:01:45.245045 extend-filesystems[1473]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Feb 13 20:01:45.250567 extend-filesystems[1435]: Resized filesystem in /dev/sda9 Feb 13 20:01:45.250567 extend-filesystems[1435]: Found sr0 Feb 13 20:01:45.250393 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:01:45.250570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
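The resize recorded above is an on-line ext4 grow: `extend-filesystems` checks `/dev/sda9`, finds the partition larger than the filesystem, and `resize2fs` expands it while mounted. A sketch of the equivalent manual steps (device name and block counts taken from the log; `resize2fs` shown as a comment since it needs the real device):

```shell
# On-line resize of a mounted ext4 root, as extend-filesystems does it:
#   resize2fs /dev/sda9          # grow the filesystem to fill the partition
# The log reports growth from 1617920 to 9393147 blocks at 4 KiB per block;
# converting block counts to bytes shows the old and new sizes:
echo "$((1617920 * 4096))"   # prints 6627000320  (~6.2 GiB)
echo "$((9393147 * 4096))"   # prints 38474330112 (~35.8 GiB)
```

The "old_desc_blocks = 1, new_desc_blocks = 5" lines are resize2fs reporting the growth of the block-group descriptor area that an on-line resize performs.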
Feb 13 20:01:45.292883 coreos-metadata[1510]: Feb 13 20:01:45.291 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Feb 13 20:01:45.293258 coreos-metadata[1510]: Feb 13 20:01:45.293 INFO Fetch successful Feb 13 20:01:45.300307 unknown[1510]: wrote ssh authorized keys file for user: core Feb 13 20:01:45.347766 update-ssh-keys[1516]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:01:45.350593 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:01:45.355390 systemd[1]: Finished sshkeys.service. Feb 13 20:01:45.366460 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:01:45.374048 containerd[1465]: time="2025-02-13T20:01:45.373365760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:01:45.447302 containerd[1465]: time="2025-02-13T20:01:45.447048800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.451182 containerd[1465]: time="2025-02-13T20:01:45.451102680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455395 containerd[1465]: time="2025-02-13T20:01:45.455347160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:01:45.455457 containerd[1465]: time="2025-02-13T20:01:45.455412840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:01:45.455624 containerd[1465]: time="2025-02-13T20:01:45.455598840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Feb 13 20:01:45.455648 containerd[1465]: time="2025-02-13T20:01:45.455630440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455714 containerd[1465]: time="2025-02-13T20:01:45.455693320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455757 containerd[1465]: time="2025-02-13T20:01:45.455712080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455949 containerd[1465]: time="2025-02-13T20:01:45.455922920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455977 containerd[1465]: time="2025-02-13T20:01:45.455949680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.455977 containerd[1465]: time="2025-02-13T20:01:45.455969840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.455984120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.458628440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.458862400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.459083160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.459105240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.459210840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:01:45.459410 containerd[1465]: time="2025-02-13T20:01:45.459258280Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466475400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466536800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466553560Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466570280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466586000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466747280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.466984560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467164760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467185720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467199000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467213520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467226320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467239120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468050 containerd[1465]: time="2025-02-13T20:01:45.467253000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467270160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467283360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467296520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467308240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467327640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467340920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467359840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467373080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467384720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467397600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467410000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467427160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467440800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 20:01:45.468354 containerd[1465]: time="2025-02-13T20:01:45.467455720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467467840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467480560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467492640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467509480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467535000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467547120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467557880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467671520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467691600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467702240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467713800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467722920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467734560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:01:45.468577 containerd[1465]: time="2025-02-13T20:01:45.467744000Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:01:45.468796 containerd[1465]: time="2025-02-13T20:01:45.467754520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:01:45.475972 systemd[1]: Started containerd.service - containerd container runtime. 
Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.474189120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.474270760Z" level=info msg="Connect containerd service" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.474330880Z" level=info msg="using legacy CRI server" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.474339720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.474459920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475229440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475504080Z" level=info msg="Start subscribing containerd event" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475565560Z" level=info msg="Start recovering state" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475652160Z" level=info msg="Start event monitor" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475665120Z" level=info msg="Start snapshots syncer" Feb 
13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475676600Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475685000Z" level=info msg="Start streaming server" Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475749560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475791520Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 20:01:45.476164 containerd[1465]: time="2025-02-13T20:01:45.475847600Z" level=info msg="containerd successfully booted in 0.105074s" Feb 13 20:01:45.660795 tar[1451]: linux-arm64/LICENSE Feb 13 20:01:45.660882 tar[1451]: linux-arm64/README.md Feb 13 20:01:45.676531 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:01:45.795174 sshd_keygen[1478]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:01:45.819115 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:01:45.823405 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:01:45.834967 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:01:45.835353 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:01:45.842556 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:01:45.855633 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:01:45.871624 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:01:45.875433 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 20:01:45.876429 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:01:46.082259 systemd-networkd[1380]: eth1: Gained IPv6LL Feb 13 20:01:46.086398 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. 
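The CRI plugin config dumped above shows `Runtimes:map[runc:{Type:io.containerd.runc.v2 ... Options:map[SystemdCgroup:true]}]`, i.e. runc driven through the v2 shim with systemd-managed cgroups. A sketch of how that same setting looks in a containerd `config.toml` (version-2 layout as used by containerd 1.7; the file path and surrounding defaults are assumptions, not taken from this host):

```toml
# /etc/containerd/config.toml (illustrative fragment)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    # Matches Options:map[SystemdCgroup:true] in the dumped config; required
    # when the kubelet's cgroupDriver is "systemd".
    SystemdCgroup = true
```

The `failed to load cni during init` error a few records later is expected at this stage: `/etc/cni/net.d` is only populated once a CNI plugin is installed, which has not happened yet on this node.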
Feb 13 20:01:46.089865 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:01:46.091754 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:01:46.098447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:01:46.102361 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:01:46.129830 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:01:46.658436 systemd-networkd[1380]: eth0: Gained IPv6LL Feb 13 20:01:46.659785 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection. Feb 13 20:01:46.842646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:46.843953 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:01:46.844913 (kubelet)[1564]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:01:46.849131 systemd[1]: Startup finished in 764ms (kernel) + 7.828s (initrd) + 4.366s (userspace) = 12.959s. Feb 13 20:01:47.384867 kubelet[1564]: E0213 20:01:47.384752 1564 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:01:47.386567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:01:47.386716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:01:57.638430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:01:57.645288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:01:57.759398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:01:57.759658 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:01:57.817684 kubelet[1583]: E0213 20:01:57.817617 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:01:57.820529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:01:57.820665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:07.851161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 20:02:07.868393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:02:07.983241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:02:07.986012 (kubelet)[1599]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:08.028249 kubelet[1599]: E0213 20:02:08.028178 1599 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:08.031449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:08.031638 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:16.863985 systemd-timesyncd[1386]: Contacted time server 129.250.35.251:123 (2.flatcar.pool.ntp.org). 
Feb 13 20:02:16.864134 systemd-timesyncd[1386]: Initial clock synchronization to Thu 2025-02-13 20:02:17.256725 UTC. Feb 13 20:02:18.105309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 20:02:18.118428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:02:18.248203 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:02:18.266006 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:18.310767 kubelet[1614]: E0213 20:02:18.310648 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:18.313944 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:18.314150 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:28.350487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Feb 13 20:02:28.356294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:02:28.483299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:02:28.485280 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:28.528218 kubelet[1629]: E0213 20:02:28.528156 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:28.531014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:28.531247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:29.899771 update_engine[1446]: I20250213 20:02:29.899588 1446 update_attempter.cc:509] Updating boot flags... Feb 13 20:02:29.957114 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1644) Feb 13 20:02:29.995303 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1640) Feb 13 20:02:38.600299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Feb 13 20:02:38.610321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:02:38.713276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:02:38.727631 (kubelet)[1661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:38.773786 kubelet[1661]: E0213 20:02:38.773734 1661 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:38.778495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:38.778750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:48.850819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Feb 13 20:02:48.858385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:02:48.976839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:02:48.992684 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:49.037807 kubelet[1677]: E0213 20:02:49.037711 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:49.041214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:49.041556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:02:59.100616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Feb 13 20:02:59.106300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:02:59.232615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:02:59.239196 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:02:59.284764 kubelet[1692]: E0213 20:02:59.284695 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:02:59.287557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:02:59.287774 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:03:09.350774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Feb 13 20:03:09.360308 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:03:09.480179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:03:09.502885 (kubelet)[1706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:03:09.546623 kubelet[1706]: E0213 20:03:09.546547 1706 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:03:09.550289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:03:09.550488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:03:19.600773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Feb 13 20:03:19.619347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:03:19.737931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:03:19.743103 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:03:19.790186 kubelet[1721]: E0213 20:03:19.790117 1721 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:03:19.793125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:03:19.793317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:03:29.850995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Feb 13 20:03:29.859398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:03:29.970015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:03:29.981568 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:03:30.019452 kubelet[1736]: E0213 20:03:30.019395 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:03:30.022017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:03:30.022353 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
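The loop above repeats roughly every ten seconds because systemd keeps restarting kubelet.service and the kubelet exits immediately: `/var/lib/kubelet/config.yaml` does not exist until `kubeadm init` or `kubeadm join` is run on the node, which writes it. A minimal sketch of the kind of KubeletConfiguration kubeadm generates there (field values are illustrative assumptions, not recovered from this host):

```yaml
# /var/lib/kubelet/config.yaml (sketch of a kubeadm-generated file)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# "systemd" matches the SystemdCgroup=true runc option containerd logged earlier.
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
```

Once kubeadm writes this file the restart loop ends on the next scheduled restart; until then each attempt fails with the same `open /var/lib/kubelet/config.yaml: no such file or directory`.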
Feb 13 20:03:35.640863 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:03:35.643061 systemd[1]: Started sshd@0-49.13.3.212:22-147.75.109.163:59868.service - OpenSSH per-connection server daemon (147.75.109.163:59868). Feb 13 20:03:36.634540 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 59868 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:36.637912 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:36.651941 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:03:36.652436 systemd-logind[1443]: New session 1 of user core. Feb 13 20:03:36.659477 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:03:36.672663 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:03:36.679604 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:03:36.684602 (systemd)[1748]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:03:36.793385 systemd[1748]: Queued start job for default target default.target. Feb 13 20:03:36.804540 systemd[1748]: Created slice app.slice - User Application Slice. Feb 13 20:03:36.804581 systemd[1748]: Reached target paths.target - Paths. Feb 13 20:03:36.804595 systemd[1748]: Reached target timers.target - Timers. Feb 13 20:03:36.806292 systemd[1748]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:03:36.821881 systemd[1748]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:03:36.822067 systemd[1748]: Reached target sockets.target - Sockets. Feb 13 20:03:36.822089 systemd[1748]: Reached target basic.target - Basic System. Feb 13 20:03:36.822157 systemd[1748]: Reached target default.target - Main User Target. Feb 13 20:03:36.822201 systemd[1748]: Startup finished in 130ms. 
Feb 13 20:03:36.822634 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:03:36.834441 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:03:37.535434 systemd[1]: Started sshd@1-49.13.3.212:22-147.75.109.163:59880.service - OpenSSH per-connection server daemon (147.75.109.163:59880). Feb 13 20:03:38.528885 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 59880 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:38.531255 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:38.537673 systemd-logind[1443]: New session 2 of user core. Feb 13 20:03:38.544457 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:03:39.220436 sshd[1759]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:39.225250 systemd[1]: sshd@1-49.13.3.212:22-147.75.109.163:59880.service: Deactivated successfully. Feb 13 20:03:39.226932 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:03:39.229185 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:03:39.230680 systemd-logind[1443]: Removed session 2. Feb 13 20:03:39.397634 systemd[1]: Started sshd@2-49.13.3.212:22-147.75.109.163:59884.service - OpenSSH per-connection server daemon (147.75.109.163:59884). Feb 13 20:03:40.100609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Feb 13 20:03:40.115408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:03:40.228108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:03:40.233330 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:03:40.275644 kubelet[1776]: E0213 20:03:40.275591 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:03:40.279433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:03:40.279585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:03:40.372388 sshd[1766]: Accepted publickey for core from 147.75.109.163 port 59884 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:40.373813 sshd[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:40.379222 systemd-logind[1443]: New session 3 of user core. Feb 13 20:03:40.387305 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:03:41.046565 sshd[1766]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:41.050946 systemd[1]: sshd@2-49.13.3.212:22-147.75.109.163:59884.service: Deactivated successfully. Feb 13 20:03:41.052797 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:03:41.054468 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:03:41.055946 systemd-logind[1443]: Removed session 3. Feb 13 20:03:41.231576 systemd[1]: Started sshd@3-49.13.3.212:22-147.75.109.163:36520.service - OpenSSH per-connection server daemon (147.75.109.163:36520). 
Feb 13 20:03:42.213613 sshd[1788]: Accepted publickey for core from 147.75.109.163 port 36520 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:42.216602 sshd[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:42.222611 systemd-logind[1443]: New session 4 of user core. Feb 13 20:03:42.233376 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:03:42.899209 sshd[1788]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:42.904632 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:03:42.904706 systemd[1]: sshd@3-49.13.3.212:22-147.75.109.163:36520.service: Deactivated successfully. Feb 13 20:03:42.907646 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:03:42.910421 systemd-logind[1443]: Removed session 4. Feb 13 20:03:43.074433 systemd[1]: Started sshd@4-49.13.3.212:22-147.75.109.163:36534.service - OpenSSH per-connection server daemon (147.75.109.163:36534). Feb 13 20:03:44.053122 sshd[1795]: Accepted publickey for core from 147.75.109.163 port 36534 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:44.055585 sshd[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:44.061695 systemd-logind[1443]: New session 5 of user core. Feb 13 20:03:44.072351 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:03:44.588154 sudo[1798]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 20:03:44.588643 sudo[1798]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:03:44.606279 sudo[1798]: pam_unix(sudo:session): session closed for user root Feb 13 20:03:44.766425 sshd[1795]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:44.771184 systemd[1]: sshd@4-49.13.3.212:22-147.75.109.163:36534.service: Deactivated successfully. 
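Each SSH connection in this section follows the same pattern: sshd logs an `Accepted publickey` line with the user, source address, port, key type, and SHA256 fingerprint, then PAM opens the session. A small parser for that accept line, as a sketch (the sample entry is copied from the transcript; `parse_accept` is an illustrative name, not part of any tool here):

```python
import re

# One "Accepted publickey" entry taken verbatim from the journal above.
line = ("sshd[1803]: Accepted publickey for core from 147.75.109.163 port 36548 "
        "ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034")

accept_re = re.compile(
    r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) SHA256:(\S+)"
)

def parse_accept(entry):
    """Pull user, source address, port, key type, and fingerprint from an sshd accept line."""
    m = accept_re.search(entry)
    if not m:
        return None
    user, addr, port, keytype, fp = m.groups()
    return {"user": user, "addr": addr, "port": int(port),
            "keytype": keytype, "fingerprint": fp}

info = parse_accept(line)
print(info["user"], info["port"])  # core 36548
```

Grouping these records by fingerprint is a quick way to confirm, as here, that all sessions came from the same key and source host.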
Feb 13 20:03:44.772992 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:03:44.775504 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:03:44.776883 systemd-logind[1443]: Removed session 5. Feb 13 20:03:44.941346 systemd[1]: Started sshd@5-49.13.3.212:22-147.75.109.163:36548.service - OpenSSH per-connection server daemon (147.75.109.163:36548). Feb 13 20:03:45.932958 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 36548 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:45.935655 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:45.942149 systemd-logind[1443]: New session 6 of user core. Feb 13 20:03:45.951363 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:03:46.456402 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 20:03:46.456827 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:03:46.462342 sudo[1807]: pam_unix(sudo:session): session closed for user root Feb 13 20:03:46.467841 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 13 20:03:46.468263 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:03:46.482523 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Feb 13 20:03:46.494011 auditctl[1810]: No rules Feb 13 20:03:46.494991 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 20:03:46.495220 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Feb 13 20:03:46.506278 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:03:46.535818 augenrules[1828]: No rules Feb 13 20:03:46.537551 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Feb 13 20:03:46.539427 sudo[1806]: pam_unix(sudo:session): session closed for user root Feb 13 20:03:46.700298 sshd[1803]: pam_unix(sshd:session): session closed for user core Feb 13 20:03:46.704444 systemd[1]: sshd@5-49.13.3.212:22-147.75.109.163:36548.service: Deactivated successfully. Feb 13 20:03:46.707061 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:03:46.710054 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:03:46.711234 systemd-logind[1443]: Removed session 6. Feb 13 20:03:46.876467 systemd[1]: Started sshd@6-49.13.3.212:22-147.75.109.163:36560.service - OpenSSH per-connection server daemon (147.75.109.163:36560). Feb 13 20:03:47.845635 sshd[1836]: Accepted publickey for core from 147.75.109.163 port 36560 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:03:47.847793 sshd[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:03:47.853638 systemd-logind[1443]: New session 7 of user core. Feb 13 20:03:47.862381 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 20:03:48.364142 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 20:03:48.364497 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 20:03:48.664342 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 20:03:48.664510 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 20:03:48.916190 dockerd[1854]: time="2025-02-13T20:03:48.915214248Z" level=info msg="Starting up" Feb 13 20:03:49.004345 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport481240493-merged.mount: Deactivated successfully. Feb 13 20:03:49.029058 dockerd[1854]: time="2025-02-13T20:03:49.028975258Z" level=info msg="Loading containers: start." 
Feb 13 20:03:49.130052 kernel: Initializing XFRM netlink socket Feb 13 20:03:49.223768 systemd-networkd[1380]: docker0: Link UP Feb 13 20:03:49.243883 dockerd[1854]: time="2025-02-13T20:03:49.243833593Z" level=info msg="Loading containers: done." Feb 13 20:03:49.264360 dockerd[1854]: time="2025-02-13T20:03:49.264281528Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 20:03:49.264621 dockerd[1854]: time="2025-02-13T20:03:49.264398383Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Feb 13 20:03:49.264621 dockerd[1854]: time="2025-02-13T20:03:49.264541842Z" level=info msg="Daemon has completed initialization" Feb 13 20:03:49.311761 dockerd[1854]: time="2025-02-13T20:03:49.310661330Z" level=info msg="API listen on /run/docker.sock" Feb 13 20:03:49.311413 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 20:03:49.997655 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2470819437-merged.mount: Deactivated successfully. Feb 13 20:03:50.350275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Feb 13 20:03:50.367445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:03:50.417555 containerd[1465]: time="2025-02-13T20:03:50.417373056Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 20:03:50.488258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
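The dockerd and containerd entries above use logfmt-style `key=value` / `key="quoted value"` pairs rather than plain prose, which makes them easy to dissect mechanically. A minimal sketch of such a parser, assuming the entry is available as a string (the sample is copied from the "Docker daemon" line above; `parse_logfmt` is my own helper name):

```python
import re

# The "Docker daemon" startup entry taken verbatim from the journal above.
entry = ('time="2025-02-13T20:03:49.264398383Z" level=info '
         'msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 '
         'containerd-snapshotter=false storage-driver=overlay2 version=26.1.0')

# key=value where value is either a double-quoted string or a bare token.
field_re = re.compile(r'(\w[\w-]*)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_logfmt(line):
    """Parse key=value pairs (bare or double-quoted) from a dockerd/containerd entry."""
    return {m.group(1): m.group(2) if m.group(2) is not None else m.group(3)
            for m in field_re.finditer(line)}

fields = parse_logfmt(entry)
print(fields["storage-driver"], fields["version"])  # overlay2 26.1.0
```

This recovers, for instance, that the daemon is v26.1.0 on the overlay2 storage driver, matching the warning about non-native diff a few entries earlier.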
Feb 13 20:03:50.489544 (kubelet)[2000]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:03:50.538974 kubelet[2000]: E0213 20:03:50.538922 2000 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:03:50.541714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:03:50.541939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:03:51.144985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930418724.mount: Deactivated successfully. Feb 13 20:03:53.400331 containerd[1465]: time="2025-02-13T20:03:53.400268417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:53.401714 containerd[1465]: time="2025-02-13T20:03:53.401652557Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=25620467" Feb 13 20:03:53.402747 containerd[1465]: time="2025-02-13T20:03:53.402678341Z" level=info msg="ImageCreate event name:\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:53.407184 containerd[1465]: time="2025-02-13T20:03:53.407103627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:53.409070 containerd[1465]: time="2025-02-13T20:03:53.408786157Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id 
\"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"25617175\" in 2.991366535s" Feb 13 20:03:53.409070 containerd[1465]: time="2025-02-13T20:03:53.408844363Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:873e20495ccf3b2111d7cfe509e724c7bdee53e5b192c926f15beb8e2a71fc8d\"" Feb 13 20:03:53.409893 containerd[1465]: time="2025-02-13T20:03:53.409855505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 20:03:55.516632 containerd[1465]: time="2025-02-13T20:03:55.516539330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:55.518050 containerd[1465]: time="2025-02-13T20:03:55.517986296Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=22471793" Feb 13 20:03:55.519473 containerd[1465]: time="2025-02-13T20:03:55.519380989Z" level=info msg="ImageCreate event name:\"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:55.524084 containerd[1465]: time="2025-02-13T20:03:55.524007330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:55.525641 containerd[1465]: time="2025-02-13T20:03:55.525504169Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"23875502\" in 2.115602343s" Feb 13 20:03:55.525641 containerd[1465]: time="2025-02-13T20:03:55.525550043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:389ff6452ae41e3e5a43db694d848bf66adb834513164d04c90e8a52f7fb17e0\"" Feb 13 20:03:55.526503 containerd[1465]: time="2025-02-13T20:03:55.526403329Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 20:03:57.140096 containerd[1465]: time="2025-02-13T20:03:57.139831190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:57.141514 containerd[1465]: time="2025-02-13T20:03:57.141445635Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=17024560" Feb 13 20:03:57.143336 containerd[1465]: time="2025-02-13T20:03:57.143274895Z" level=info msg="ImageCreate event name:\"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:57.146851 containerd[1465]: time="2025-02-13T20:03:57.146789912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:57.149241 containerd[1465]: time="2025-02-13T20:03:57.148439633Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"18428287\" in 1.62199931s" Feb 13 20:03:57.149241 
containerd[1465]: time="2025-02-13T20:03:57.148485548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:e0b799edb30ee638812cfdec1befcd2728c87f3344cb0c00121ba1284e6c9f19\"" Feb 13 20:03:57.149585 containerd[1465]: time="2025-02-13T20:03:57.149558498Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 20:03:58.138395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307017970.mount: Deactivated successfully. Feb 13 20:03:58.756226 containerd[1465]: time="2025-02-13T20:03:58.756181281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:58.758275 containerd[1465]: time="2025-02-13T20:03:58.758235367Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769282" Feb 13 20:03:58.759351 containerd[1465]: time="2025-02-13T20:03:58.759284968Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:58.761993 containerd[1465]: time="2025-02-13T20:03:58.761924227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:03:58.763169 containerd[1465]: time="2025-02-13T20:03:58.763119530Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.613426808s" Feb 13 20:03:58.763265 containerd[1465]: time="2025-02-13T20:03:58.763173044Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"" Feb 13 20:03:58.763849 containerd[1465]: time="2025-02-13T20:03:58.763814611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 20:03:59.366388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650776376.mount: Deactivated successfully. Feb 13 20:04:00.072114 containerd[1465]: time="2025-02-13T20:04:00.071652143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.074073 containerd[1465]: time="2025-02-13T20:04:00.073782446Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Feb 13 20:04:00.075915 containerd[1465]: time="2025-02-13T20:04:00.075832558Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.079829 containerd[1465]: time="2025-02-13T20:04:00.079306924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.080630 containerd[1465]: time="2025-02-13T20:04:00.080582075Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.316643278s" Feb 13 20:04:00.080630 containerd[1465]: time="2025-02-13T20:04:00.080626910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Feb 13 20:04:00.081738 containerd[1465]: time="2025-02-13T20:04:00.081711120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 20:04:00.600742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Feb 13 20:04:00.608480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:04:00.707255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3649906041.mount: Deactivated successfully. Feb 13 20:04:00.717018 containerd[1465]: time="2025-02-13T20:04:00.716950498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.718147 containerd[1465]: time="2025-02-13T20:04:00.717947716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Feb 13 20:04:00.719542 containerd[1465]: time="2025-02-13T20:04:00.719290619Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.722013 containerd[1465]: time="2025-02-13T20:04:00.721942670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:00.723061 containerd[1465]: time="2025-02-13T20:04:00.722838019Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 640.971515ms" Feb 13 20:04:00.723061 containerd[1465]: time="2025-02-13T20:04:00.722876855Z" level=info msg="PullImage 
\"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Feb 13 20:04:00.723946 containerd[1465]: time="2025-02-13T20:04:00.723808720Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 20:04:00.728081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:00.734358 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:04:00.777546 kubelet[2132]: E0213 20:04:00.777483 2132 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:04:00.780268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:04:00.780419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:04:01.360763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462724493.mount: Deactivated successfully. 
Feb 13 20:04:03.721571 containerd[1465]: time="2025-02-13T20:04:03.721505560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:03.723998 containerd[1465]: time="2025-02-13T20:04:03.723946913Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406487" Feb 13 20:04:03.725124 containerd[1465]: time="2025-02-13T20:04:03.724971466Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:03.729293 containerd[1465]: time="2025-02-13T20:04:03.729137753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:04:03.733100 containerd[1465]: time="2025-02-13T20:04:03.731895719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.008047724s" Feb 13 20:04:03.733100 containerd[1465]: time="2025-02-13T20:04:03.732018309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Feb 13 20:04:09.443240 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:09.457930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:04:09.490324 systemd[1]: Reloading requested from client PID 2220 ('systemctl') (unit session-7.scope)... Feb 13 20:04:09.490348 systemd[1]: Reloading... 
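Each pull in this sequence reports both a size ("bytes read" in the "stop pulling" entry) and a duration (in the "Pulled image ... in Ns" entry), so effective registry throughput can be estimated directly from the log. A sketch under that assumption (sizes and durations below are copied from the entries above; the helper name is illustrative):

```python
# image: (bytes read from "stop pulling" entries, pull duration in seconds),
# values taken from the containerd entries above.
pulls = {
    "kube-apiserver:v1.31.6": (25620467, 2.991366535),
    "etcd:3.5.15-0": (66406487, 3.008047724),
    "pause:3.10": (268723, 0.640971515),
}

def rate_mib_per_s(size_bytes, seconds):
    """Effective pull throughput in MiB/s."""
    return size_bytes / seconds / (1024 * 1024)

for image, (size, secs) in pulls.items():
    print(f"{image}: {rate_mib_per_s(size, secs):.1f} MiB/s")
```

Note the tiny pause image is dominated by request latency rather than bandwidth, which is why its computed rate is far below the larger layers'.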
Feb 13 20:04:09.622063 zram_generator::config[2263]: No configuration found. Feb 13 20:04:09.726858 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:04:09.796929 systemd[1]: Reloading finished in 306 ms. Feb 13 20:04:09.855433 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 20:04:09.855577 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 20:04:09.855938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:09.864502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:04:09.991373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:10.001717 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:04:10.048744 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:04:10.048744 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:04:10.048744 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
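After the reload above, kubelet finally starts with a real config and begins emitting klog-formatted lines: a one-letter severity, MMDD date, timestamp, PID, and `file.go:line]` header before the message (the `I0213`/`E0213` entries that follow). A small header parser, as a sketch (the sample entry is taken from this journal; `parse_klog_header` is my own name for the helper):

```python
import re

# A klog-formatted kubelet entry taken from the journal (payload shortened).
entry = ('E0213 20:02:59.284695 1692 run.go:72] "command failed" '
         'err="failed to load kubelet config file"')

klog_re = re.compile(
    r'^(?P<severity>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) (?P<file>[^:]+):(?P<line>\d+)\]'
)

def parse_klog_header(line):
    """Split a klog-style header (severity, date, time, pid, source location) into fields."""
    m = klog_re.match(line)
    return m.groupdict() if m else None

hdr = parse_klog_header(entry)
print(hdr["severity"], hdr["file"], hdr["line"])  # E run.go 72
```

Filtering on `severity in "EF"` separates hard failures (like the repeated `run.go:72` config error earlier) from the informational startup chatter that follows.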
Feb 13 20:04:10.049179 kubelet[2308]: I0213 20:04:10.049066 2308 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:04:11.434495 kubelet[2308]: I0213 20:04:11.434449 2308 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:04:11.437054 kubelet[2308]: I0213 20:04:11.435053 2308 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:04:11.437054 kubelet[2308]: I0213 20:04:11.435490 2308 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:04:11.464316 kubelet[2308]: E0213 20:04:11.464268 2308 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.3.212:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:11.465449 kubelet[2308]: I0213 20:04:11.465405 2308 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:04:11.481469 kubelet[2308]: E0213 20:04:11.481428 2308 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:04:11.481648 kubelet[2308]: I0213 20:04:11.481632 2308 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:04:11.486116 kubelet[2308]: I0213 20:04:11.486081 2308 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 20:04:11.487471 kubelet[2308]: I0213 20:04:11.487440 2308 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:04:11.487805 kubelet[2308]: I0213 20:04:11.487770 2308 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:04:11.488112 kubelet[2308]: I0213 20:04:11.487888 2308 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-1-94e317dfd2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:04:11.488425 kubelet[2308]: I0213 20:04:11.488407 2308 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:04:11.488497 kubelet[2308]: I0213 20:04:11.488488 2308 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:04:11.488766 kubelet[2308]: I0213 20:04:11.488751 2308 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:04:11.491057 kubelet[2308]: I0213 20:04:11.491023 2308 kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:04:11.491189 kubelet[2308]: I0213 20:04:11.491177 2308 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:04:11.491308 kubelet[2308]: I0213 20:04:11.491298 2308 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:04:11.491413 kubelet[2308]: I0213 20:04:11.491400 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:04:11.498378 kubelet[2308]: W0213 20:04:11.498281 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.3.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-1-94e317dfd2&limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:11.498489 kubelet[2308]: E0213 20:04:11.498395 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.3.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-1-94e317dfd2&limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:11.498945 kubelet[2308]: W0213 20:04:11.498885 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.3.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: 
connection refused Feb 13 20:04:11.499003 kubelet[2308]: E0213 20:04:11.498946 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.3.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:11.499411 kubelet[2308]: I0213 20:04:11.499379 2308 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:04:11.501624 kubelet[2308]: I0213 20:04:11.501602 2308 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:04:11.502574 kubelet[2308]: W0213 20:04:11.502536 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 20:04:11.503406 kubelet[2308]: I0213 20:04:11.503369 2308 server.go:1269] "Started kubelet" Feb 13 20:04:11.504952 kubelet[2308]: I0213 20:04:11.504910 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:04:11.505929 kubelet[2308]: I0213 20:04:11.505868 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:04:11.507331 kubelet[2308]: I0213 20:04:11.507017 2308 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:04:11.508060 kubelet[2308]: I0213 20:04:11.507971 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:04:11.508267 kubelet[2308]: I0213 20:04:11.508233 2308 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:04:11.510830 kubelet[2308]: I0213 20:04:11.510786 2308 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:04:11.515391 kubelet[2308]: I0213 20:04:11.514745 2308 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:04:11.515391 kubelet[2308]: E0213 20:04:11.515156 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:11.515957 kubelet[2308]: I0213 20:04:11.515933 2308 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:04:11.516286 kubelet[2308]: I0213 20:04:11.516076 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:04:11.518419 kubelet[2308]: E0213 20:04:11.516901 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.3.212:6443/api/v1/namespaces/default/events\": dial tcp 49.13.3.212:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-1-94e317dfd2.1823dd2985ccc9e4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-1-94e317dfd2,UID:ci-4081-3-1-1-94e317dfd2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-1-94e317dfd2,},FirstTimestamp:2025-02-13 20:04:11.503331812 +0000 UTC m=+1.497539593,LastTimestamp:2025-02-13 20:04:11.503331812 +0000 UTC m=+1.497539593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-1-94e317dfd2,}" Feb 13 20:04:11.521427 kubelet[2308]: I0213 20:04:11.521404 2308 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:04:11.522393 kubelet[2308]: I0213 20:04:11.522021 2308 reconciler.go:26] 
"Reconciler: start to sync state" Feb 13 20:04:11.522393 kubelet[2308]: E0213 20:04:11.522114 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-1-94e317dfd2?timeout=10s\": dial tcp 49.13.3.212:6443: connect: connection refused" interval="200ms" Feb 13 20:04:11.524173 kubelet[2308]: I0213 20:04:11.522500 2308 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:04:11.540436 kubelet[2308]: W0213 20:04:11.540313 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.3.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:11.540625 kubelet[2308]: E0213 20:04:11.540605 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.3.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:11.540721 kubelet[2308]: I0213 20:04:11.540681 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:04:11.542297 kubelet[2308]: I0213 20:04:11.542260 2308 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 20:04:11.542297 kubelet[2308]: I0213 20:04:11.542295 2308 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:04:11.542453 kubelet[2308]: I0213 20:04:11.542322 2308 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:04:11.542453 kubelet[2308]: E0213 20:04:11.542411 2308 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:04:11.548642 kubelet[2308]: I0213 20:04:11.548588 2308 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:04:11.548642 kubelet[2308]: I0213 20:04:11.548611 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:04:11.548642 kubelet[2308]: I0213 20:04:11.548632 2308 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:04:11.551259 kubelet[2308]: W0213 20:04:11.551148 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.3.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:11.551259 kubelet[2308]: E0213 20:04:11.551218 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.3.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:11.551694 kubelet[2308]: I0213 20:04:11.551655 2308 policy_none.go:49] "None policy: Start" Feb 13 20:04:11.552862 kubelet[2308]: I0213 20:04:11.552517 2308 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:04:11.552862 kubelet[2308]: I0213 20:04:11.552547 2308 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:04:11.563071 systemd[1]: Created slice kubepods.slice - libcontainer 
container kubepods.slice. Feb 13 20:04:11.583440 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:04:11.588591 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 20:04:11.601196 kubelet[2308]: I0213 20:04:11.601160 2308 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:04:11.601792 kubelet[2308]: I0213 20:04:11.601569 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:04:11.601792 kubelet[2308]: I0213 20:04:11.601588 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:04:11.603644 kubelet[2308]: I0213 20:04:11.603621 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:04:11.606231 kubelet[2308]: E0213 20:04:11.606137 2308 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:11.657523 systemd[1]: Created slice kubepods-burstable-pode72b88b003c39e92cf64d2dc68fc74d4.slice - libcontainer container kubepods-burstable-pode72b88b003c39e92cf64d2dc68fc74d4.slice. Feb 13 20:04:11.672724 systemd[1]: Created slice kubepods-burstable-pod7ae660e38ca9bf009d2ed45a5e6ce7a7.slice - libcontainer container kubepods-burstable-pod7ae660e38ca9bf009d2ed45a5e6ce7a7.slice. Feb 13 20:04:11.690369 systemd[1]: Created slice kubepods-burstable-podebd583a411165856a31e6d56bdb0caed.slice - libcontainer container kubepods-burstable-podebd583a411165856a31e6d56bdb0caed.slice. 
Feb 13 20:04:11.705246 kubelet[2308]: I0213 20:04:11.705175 2308 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.705784 kubelet[2308]: E0213 20:04:11.705604 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.3.212:6443/api/v1/nodes\": dial tcp 49.13.3.212:6443: connect: connection refused" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.722797 kubelet[2308]: E0213 20:04:11.722720 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-1-94e317dfd2?timeout=10s\": dial tcp 49.13.3.212:6443: connect: connection refused" interval="400ms" Feb 13 20:04:11.823365 kubelet[2308]: I0213 20:04:11.823295 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823522 kubelet[2308]: I0213 20:04:11.823410 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823522 kubelet[2308]: I0213 20:04:11.823437 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823522 kubelet[2308]: I0213 20:04:11.823455 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823522 kubelet[2308]: I0213 20:04:11.823472 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebd583a411165856a31e6d56bdb0caed-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-1-94e317dfd2\" (UID: \"ebd583a411165856a31e6d56bdb0caed\") " pod="kube-system/kube-scheduler-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823522 kubelet[2308]: I0213 20:04:11.823487 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823668 kubelet[2308]: I0213 20:04:11.823506 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823668 kubelet[2308]: I0213 20:04:11.823522 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.823668 kubelet[2308]: I0213 20:04:11.823536 2308 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.909606 kubelet[2308]: I0213 20:04:11.909542 2308 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.910249 kubelet[2308]: E0213 20:04:11.910204 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.3.212:6443/api/v1/nodes\": dial tcp 49.13.3.212:6443: connect: connection refused" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:11.968558 containerd[1465]: time="2025-02-13T20:04:11.968359466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-1-94e317dfd2,Uid:e72b88b003c39e92cf64d2dc68fc74d4,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:11.988729 containerd[1465]: time="2025-02-13T20:04:11.988146824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-1-94e317dfd2,Uid:7ae660e38ca9bf009d2ed45a5e6ce7a7,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:11.999874 containerd[1465]: time="2025-02-13T20:04:11.999494095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-1-94e317dfd2,Uid:ebd583a411165856a31e6d56bdb0caed,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:12.124083 kubelet[2308]: E0213 20:04:12.123996 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.13.3.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-1-94e317dfd2?timeout=10s\": dial tcp 49.13.3.212:6443: connect: connection refused" interval="800ms" Feb 13 20:04:12.320380 kubelet[2308]: I0213 20:04:12.320009 2308 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:12.320991 kubelet[2308]: E0213 20:04:12.320946 2308 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://49.13.3.212:6443/api/v1/nodes\": dial tcp 49.13.3.212:6443: connect: connection refused" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:12.423500 kubelet[2308]: W0213 20:04:12.423325 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.3.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:12.423500 kubelet[2308]: E0213 20:04:12.423458 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.3.212:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:12.547810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3246413951.mount: Deactivated successfully. 
Feb 13 20:04:12.558823 containerd[1465]: time="2025-02-13T20:04:12.557765779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:04:12.562659 containerd[1465]: time="2025-02-13T20:04:12.562615133Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Feb 13 20:04:12.563686 containerd[1465]: time="2025-02-13T20:04:12.563640210Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:04:12.565494 containerd[1465]: time="2025-02-13T20:04:12.565440253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:04:12.568052 containerd[1465]: time="2025-02-13T20:04:12.566894872Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:04:12.568052 containerd[1465]: time="2025-02-13T20:04:12.567955786Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:04:12.568393 containerd[1465]: time="2025-02-13T20:04:12.568333650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:04:12.571099 containerd[1465]: time="2025-02-13T20:04:12.570965779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:04:12.573404 
containerd[1465]: time="2025-02-13T20:04:12.573354877Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.888696ms" Feb 13 20:04:12.577100 containerd[1465]: time="2025-02-13T20:04:12.577053360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 588.765223ms" Feb 13 20:04:12.579835 containerd[1465]: time="2025-02-13T20:04:12.579793564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.214992ms" Feb 13 20:04:12.696655 containerd[1465]: time="2025-02-13T20:04:12.696344854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:12.696655 containerd[1465]: time="2025-02-13T20:04:12.696433610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:12.696655 containerd[1465]: time="2025-02-13T20:04:12.696448769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.696655 containerd[1465]: time="2025-02-13T20:04:12.696553605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.705819 containerd[1465]: time="2025-02-13T20:04:12.705613700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:12.705819 containerd[1465]: time="2025-02-13T20:04:12.705781613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:12.705819 containerd[1465]: time="2025-02-13T20:04:12.705795572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.707332 containerd[1465]: time="2025-02-13T20:04:12.707053599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:12.707332 containerd[1465]: time="2025-02-13T20:04:12.707182394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:12.707332 containerd[1465]: time="2025-02-13T20:04:12.707211592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.707985 containerd[1465]: time="2025-02-13T20:04:12.707423863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.709541 containerd[1465]: time="2025-02-13T20:04:12.709384140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:12.727263 systemd[1]: Started cri-containerd-39781bf275060551236a906b156d6641813216198e833a26a6751350b7c6b561.scope - libcontainer container 39781bf275060551236a906b156d6641813216198e833a26a6751350b7c6b561. 
Feb 13 20:04:12.731788 kubelet[2308]: W0213 20:04:12.731397 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.3.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:12.731788 kubelet[2308]: E0213 20:04:12.731453 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.3.212:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:12.746374 systemd[1]: Started cri-containerd-593b71b782e4f186a9e5afa8cf635196b8dfe4054053dee67028bad3040f4dfa.scope - libcontainer container 593b71b782e4f186a9e5afa8cf635196b8dfe4054053dee67028bad3040f4dfa. Feb 13 20:04:12.748821 systemd[1]: Started cri-containerd-6935aeb7ef32859fcf4b4ac4857c36d4cd9a0398d7e0ece12cc6de2828a09c4d.scope - libcontainer container 6935aeb7ef32859fcf4b4ac4857c36d4cd9a0398d7e0ece12cc6de2828a09c4d. 
Feb 13 20:04:12.795913 containerd[1465]: time="2025-02-13T20:04:12.795828909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-1-94e317dfd2,Uid:e72b88b003c39e92cf64d2dc68fc74d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"39781bf275060551236a906b156d6641813216198e833a26a6751350b7c6b561\"" Feb 13 20:04:12.804115 containerd[1465]: time="2025-02-13T20:04:12.803943804Z" level=info msg="CreateContainer within sandbox \"39781bf275060551236a906b156d6641813216198e833a26a6751350b7c6b561\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:04:12.819061 kubelet[2308]: W0213 20:04:12.818829 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.3.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:12.819061 kubelet[2308]: E0213 20:04:12.818947 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.3.212:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:04:12.826802 kubelet[2308]: W0213 20:04:12.826666 2308 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.3.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-1-94e317dfd2&limit=500&resourceVersion=0": dial tcp 49.13.3.212:6443: connect: connection refused Feb 13 20:04:12.826802 kubelet[2308]: E0213 20:04:12.826740 2308 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.3.212:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-1-94e317dfd2&limit=500&resourceVersion=0\": dial tcp 49.13.3.212:6443: 
connect: connection refused" logger="UnhandledError" Feb 13 20:04:12.828629 containerd[1465]: time="2025-02-13T20:04:12.828521200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-1-94e317dfd2,Uid:7ae660e38ca9bf009d2ed45a5e6ce7a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"593b71b782e4f186a9e5afa8cf635196b8dfe4054053dee67028bad3040f4dfa\"" Feb 13 20:04:12.829082 containerd[1465]: time="2025-02-13T20:04:12.828945342Z" level=info msg="CreateContainer within sandbox \"39781bf275060551236a906b156d6641813216198e833a26a6751350b7c6b561\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"351cb73d44ab936a4069cf2e03a88d4ae69d0d99d0b3f164d35894b4e5d48af6\"" Feb 13 20:04:12.832618 containerd[1465]: time="2025-02-13T20:04:12.830797904Z" level=info msg="StartContainer for \"351cb73d44ab936a4069cf2e03a88d4ae69d0d99d0b3f164d35894b4e5d48af6\"" Feb 13 20:04:12.833265 containerd[1465]: time="2025-02-13T20:04:12.833061007Z" level=info msg="CreateContainer within sandbox \"593b71b782e4f186a9e5afa8cf635196b8dfe4054053dee67028bad3040f4dfa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:04:12.842822 containerd[1465]: time="2025-02-13T20:04:12.842781675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-1-94e317dfd2,Uid:ebd583a411165856a31e6d56bdb0caed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6935aeb7ef32859fcf4b4ac4857c36d4cd9a0398d7e0ece12cc6de2828a09c4d\"" Feb 13 20:04:12.847864 containerd[1465]: time="2025-02-13T20:04:12.847446117Z" level=info msg="CreateContainer within sandbox \"6935aeb7ef32859fcf4b4ac4857c36d4cd9a0398d7e0ece12cc6de2828a09c4d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:04:12.869236 systemd[1]: Started cri-containerd-351cb73d44ab936a4069cf2e03a88d4ae69d0d99d0b3f164d35894b4e5d48af6.scope - libcontainer container 
351cb73d44ab936a4069cf2e03a88d4ae69d0d99d0b3f164d35894b4e5d48af6. Feb 13 20:04:12.870427 containerd[1465]: time="2025-02-13T20:04:12.870371063Z" level=info msg="CreateContainer within sandbox \"593b71b782e4f186a9e5afa8cf635196b8dfe4054053dee67028bad3040f4dfa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d86de051263f10cd4114f5bf22ec3e42de980ac251bbc5d1eb5d037a7aad348\"" Feb 13 20:04:12.871437 containerd[1465]: time="2025-02-13T20:04:12.871404099Z" level=info msg="StartContainer for \"4d86de051263f10cd4114f5bf22ec3e42de980ac251bbc5d1eb5d037a7aad348\"" Feb 13 20:04:12.886118 containerd[1465]: time="2025-02-13T20:04:12.886072716Z" level=info msg="CreateContainer within sandbox \"6935aeb7ef32859fcf4b4ac4857c36d4cd9a0398d7e0ece12cc6de2828a09c4d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f05719346204c5cef71aad8f28f48ffa584f6cd27f1ffc3ce455959a6bd7185\"" Feb 13 20:04:12.888664 containerd[1465]: time="2025-02-13T20:04:12.887934117Z" level=info msg="StartContainer for \"3f05719346204c5cef71aad8f28f48ffa584f6cd27f1ffc3ce455959a6bd7185\"" Feb 13 20:04:12.914311 systemd[1]: Started cri-containerd-4d86de051263f10cd4114f5bf22ec3e42de980ac251bbc5d1eb5d037a7aad348.scope - libcontainer container 4d86de051263f10cd4114f5bf22ec3e42de980ac251bbc5d1eb5d037a7aad348. 
Feb 13 20:04:12.925400 kubelet[2308]: E0213 20:04:12.925143 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.3.212:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-1-94e317dfd2?timeout=10s\": dial tcp 49.13.3.212:6443: connect: connection refused" interval="1.6s" Feb 13 20:04:12.932983 containerd[1465]: time="2025-02-13T20:04:12.932931366Z" level=info msg="StartContainer for \"351cb73d44ab936a4069cf2e03a88d4ae69d0d99d0b3f164d35894b4e5d48af6\" returns successfully" Feb 13 20:04:12.950961 systemd[1]: Started cri-containerd-3f05719346204c5cef71aad8f28f48ffa584f6cd27f1ffc3ce455959a6bd7185.scope - libcontainer container 3f05719346204c5cef71aad8f28f48ffa584f6cd27f1ffc3ce455959a6bd7185. Feb 13 20:04:12.976634 containerd[1465]: time="2025-02-13T20:04:12.976503755Z" level=info msg="StartContainer for \"4d86de051263f10cd4114f5bf22ec3e42de980ac251bbc5d1eb5d037a7aad348\" returns successfully" Feb 13 20:04:13.029506 containerd[1465]: time="2025-02-13T20:04:13.029457900Z" level=info msg="StartContainer for \"3f05719346204c5cef71aad8f28f48ffa584f6cd27f1ffc3ce455959a6bd7185\" returns successfully" Feb 13 20:04:13.124073 kubelet[2308]: I0213 20:04:13.124042 2308 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:15.024599 kubelet[2308]: E0213 20:04:15.024542 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-1-1-94e317dfd2\" not found" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:15.059045 kubelet[2308]: I0213 20:04:15.057658 2308 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:15.059045 kubelet[2308]: E0213 20:04:15.057707 2308 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-1-1-94e317dfd2\": node \"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:15.086798 kubelet[2308]: E0213 
20:04:15.086754 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:15.187157 kubelet[2308]: E0213 20:04:15.187100 2308 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:15.501683 kubelet[2308]: I0213 20:04:15.501398 2308 apiserver.go:52] "Watching apiserver" Feb 13 20:04:15.522398 kubelet[2308]: I0213 20:04:15.522339 2308 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:04:15.574480 kubelet[2308]: E0213 20:04:15.574433 2308 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:17.457813 systemd[1]: Reloading requested from client PID 2582 ('systemctl') (unit session-7.scope)... Feb 13 20:04:17.458196 systemd[1]: Reloading... Feb 13 20:04:17.549069 zram_generator::config[2618]: No configuration found. Feb 13 20:04:17.665776 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:04:17.747758 systemd[1]: Reloading finished in 289 ms. Feb 13 20:04:17.788217 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:04:17.803930 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 20:04:17.804381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:17.804471 systemd[1]: kubelet.service: Consumed 1.905s CPU time, 117.8M memory peak, 0B memory swap peak. Feb 13 20:04:17.813529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 20:04:17.917946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:04:17.928587 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 20:04:17.978217 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:04:17.978217 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 20:04:17.978217 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 20:04:17.978217 kubelet[2667]: I0213 20:04:17.977615 2667 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 20:04:17.990737 kubelet[2667]: I0213 20:04:17.990697 2667 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 20:04:17.990737 kubelet[2667]: I0213 20:04:17.990730 2667 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 20:04:17.990967 kubelet[2667]: I0213 20:04:17.990951 2667 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 20:04:17.992667 kubelet[2667]: I0213 20:04:17.992641 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 20:04:17.994915 kubelet[2667]: I0213 20:04:17.994763 2667 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 20:04:17.998602 kubelet[2667]: E0213 20:04:17.998512 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 20:04:17.998602 kubelet[2667]: I0213 20:04:17.998544 2667 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 20:04:18.002676 kubelet[2667]: I0213 20:04:18.001210 2667 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 20:04:18.002676 kubelet[2667]: I0213 20:04:18.001380 2667 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 20:04:18.002676 kubelet[2667]: I0213 20:04:18.001546 2667 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 20:04:18.002676 kubelet[2667]: I0213 20:04:18.001583 2667 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-1-1-94e317dfd2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.001818 2667 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.001832 2667 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.001868 2667 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.001992 2667 
kubelet.go:408] "Attempting to sync node with API server" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.002008 2667 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.002049 2667 kubelet.go:314] "Adding apiserver pod source" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.002064 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 20:04:18.003049 kubelet[2667]: I0213 20:04:18.002878 2667 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Feb 13 20:04:18.003432 kubelet[2667]: I0213 20:04:18.003404 2667 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 20:04:18.003847 kubelet[2667]: I0213 20:04:18.003817 2667 server.go:1269] "Started kubelet" Feb 13 20:04:18.006822 kubelet[2667]: I0213 20:04:18.006778 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:04:18.016037 kubelet[2667]: I0213 20:04:18.011919 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:04:18.017455 kubelet[2667]: I0213 20:04:18.017408 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:04:18.020040 kubelet[2667]: I0213 20:04:18.018308 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:04:18.020040 kubelet[2667]: I0213 20:04:18.018600 2667 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:04:18.020040 kubelet[2667]: I0213 20:04:18.019844 2667 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:04:18.020255 kubelet[2667]: E0213 20:04:18.020148 2667 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4081-3-1-1-94e317dfd2\" not found" Feb 13 20:04:18.027045 kubelet[2667]: I0213 20:04:18.024365 2667 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:04:18.027045 kubelet[2667]: I0213 20:04:18.024508 2667 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:04:18.029031 kubelet[2667]: I0213 20:04:18.027292 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:04:18.029031 kubelet[2667]: I0213 20:04:18.028150 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:04:18.029031 kubelet[2667]: I0213 20:04:18.028179 2667 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:04:18.029031 kubelet[2667]: I0213 20:04:18.028197 2667 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:04:18.029031 kubelet[2667]: E0213 20:04:18.028241 2667 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:04:18.038478 kubelet[2667]: I0213 20:04:18.036533 2667 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:04:18.045161 kubelet[2667]: I0213 20:04:18.040624 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:04:18.053591 kubelet[2667]: I0213 20:04:18.053562 2667 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:04:18.053728 kubelet[2667]: I0213 20:04:18.053718 2667 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:04:18.089231 kubelet[2667]: E0213 20:04:18.089201 2667 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:04:18.128512 kubelet[2667]: E0213 20:04:18.128466 2667 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 20:04:18.131766 kubelet[2667]: I0213 20:04:18.131741 2667 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:04:18.131766 kubelet[2667]: I0213 20:04:18.131758 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:04:18.131913 kubelet[2667]: I0213 20:04:18.131780 2667 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:04:18.131951 kubelet[2667]: I0213 20:04:18.131933 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 20:04:18.131980 kubelet[2667]: I0213 20:04:18.131949 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 20:04:18.131980 kubelet[2667]: I0213 20:04:18.131969 2667 policy_none.go:49] "None policy: Start" Feb 13 20:04:18.133582 kubelet[2667]: I0213 20:04:18.133406 2667 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:04:18.133582 kubelet[2667]: I0213 20:04:18.133436 2667 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:04:18.133734 kubelet[2667]: I0213 20:04:18.133616 2667 state_mem.go:75] "Updated machine memory state" Feb 13 20:04:18.146551 kubelet[2667]: I0213 20:04:18.145456 2667 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:04:18.146551 kubelet[2667]: I0213 20:04:18.145696 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:04:18.146551 kubelet[2667]: I0213 20:04:18.145710 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:04:18.146551 kubelet[2667]: I0213 20:04:18.146394 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:04:18.266938 kubelet[2667]: 
I0213 20:04:18.265469 2667 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.283834 kubelet[2667]: I0213 20:04:18.283791 2667 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.284484 kubelet[2667]: I0213 20:04:18.284461 2667 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428440 kubelet[2667]: I0213 20:04:18.428125 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428440 kubelet[2667]: I0213 20:04:18.428184 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428440 kubelet[2667]: I0213 20:04:18.428211 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428440 kubelet[2667]: I0213 20:04:18.428234 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428440 kubelet[2667]: I0213 20:04:18.428278 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebd583a411165856a31e6d56bdb0caed-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-1-94e317dfd2\" (UID: \"ebd583a411165856a31e6d56bdb0caed\") " pod="kube-system/kube-scheduler-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428799 kubelet[2667]: I0213 20:04:18.428302 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428799 kubelet[2667]: I0213 20:04:18.428325 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e72b88b003c39e92cf64d2dc68fc74d4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-1-94e317dfd2\" (UID: \"e72b88b003c39e92cf64d2dc68fc74d4\") " pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428799 kubelet[2667]: I0213 20:04:18.428348 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.428799 kubelet[2667]: I0213 20:04:18.428372 2667 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ae660e38ca9bf009d2ed45a5e6ce7a7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-1-94e317dfd2\" (UID: \"7ae660e38ca9bf009d2ed45a5e6ce7a7\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" Feb 13 20:04:18.452043 sudo[2700]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 20:04:18.452474 sudo[2700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 20:04:18.908243 sudo[2700]: pam_unix(sudo:session): session closed for user root Feb 13 20:04:19.008128 kubelet[2667]: I0213 20:04:19.007109 2667 apiserver.go:52] "Watching apiserver" Feb 13 20:04:19.026256 kubelet[2667]: I0213 20:04:19.026184 2667 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 20:04:19.065252 kubelet[2667]: I0213 20:04:19.065181 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-1-94e317dfd2" podStartSLOduration=1.065145424 podStartE2EDuration="1.065145424s" podCreationTimestamp="2025-02-13 20:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:19.062387751 +0000 UTC m=+1.127502802" watchObservedRunningTime="2025-02-13 20:04:19.065145424 +0000 UTC m=+1.130260475" Feb 13 20:04:19.088315 kubelet[2667]: I0213 20:04:19.088182 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-1-94e317dfd2" podStartSLOduration=1.088148313 podStartE2EDuration="1.088148313s" podCreationTimestamp="2025-02-13 20:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:19.075149414 +0000 UTC 
m=+1.140264425" watchObservedRunningTime="2025-02-13 20:04:19.088148313 +0000 UTC m=+1.153263404" Feb 13 20:04:19.103059 kubelet[2667]: I0213 20:04:19.102970 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-1-94e317dfd2" podStartSLOduration=1.102914262 podStartE2EDuration="1.102914262s" podCreationTimestamp="2025-02-13 20:04:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:19.088671624 +0000 UTC m=+1.153786675" watchObservedRunningTime="2025-02-13 20:04:19.102914262 +0000 UTC m=+1.168029353" Feb 13 20:04:21.368288 sudo[1839]: pam_unix(sudo:session): session closed for user root Feb 13 20:04:21.527435 sshd[1836]: pam_unix(sshd:session): session closed for user core Feb 13 20:04:21.533953 systemd[1]: sshd@6-49.13.3.212:22-147.75.109.163:36560.service: Deactivated successfully. Feb 13 20:04:21.538053 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 20:04:21.538485 systemd[1]: session-7.scope: Consumed 8.290s CPU time, 154.9M memory peak, 0B memory swap peak. Feb 13 20:04:21.539867 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Feb 13 20:04:21.541074 systemd-logind[1443]: Removed session 7. Feb 13 20:04:22.805305 kubelet[2667]: I0213 20:04:22.805245 2667 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 20:04:22.806261 containerd[1465]: time="2025-02-13T20:04:22.806216940Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 20:04:22.807020 kubelet[2667]: I0213 20:04:22.806466 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 20:04:23.484916 systemd[1]: Created slice kubepods-besteffort-pod1240bce9_8d5d_4df7_bc50_d660bc51c168.slice - libcontainer container kubepods-besteffort-pod1240bce9_8d5d_4df7_bc50_d660bc51c168.slice. Feb 13 20:04:23.514979 systemd[1]: Created slice kubepods-burstable-pod0f913e46_84c7_4b14_80ff_24e7ab8059c1.slice - libcontainer container kubepods-burstable-pod0f913e46_84c7_4b14_80ff_24e7ab8059c1.slice. Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.560928 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1240bce9-8d5d-4df7-bc50-d660bc51c168-kube-proxy\") pod \"kube-proxy-z54tl\" (UID: \"1240bce9-8d5d-4df7-bc50-d660bc51c168\") " pod="kube-system/kube-proxy-z54tl" Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.560991 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-run\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.561013 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1240bce9-8d5d-4df7-bc50-d660bc51c168-xtables-lock\") pod \"kube-proxy-z54tl\" (UID: \"1240bce9-8d5d-4df7-bc50-d660bc51c168\") " pod="kube-system/kube-proxy-z54tl" Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.561041 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-etc-cni-netd\") pod \"cilium-4hq92\" (UID: 
\"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.561060 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-xtables-lock\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561682 kubelet[2667]: I0213 20:04:23.561078 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hubble-tls\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561935 kubelet[2667]: I0213 20:04:23.561094 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1240bce9-8d5d-4df7-bc50-d660bc51c168-lib-modules\") pod \"kube-proxy-z54tl\" (UID: \"1240bce9-8d5d-4df7-bc50-d660bc51c168\") " pod="kube-system/kube-proxy-z54tl" Feb 13 20:04:23.561935 kubelet[2667]: I0213 20:04:23.561113 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfcxd\" (UniqueName: \"kubernetes.io/projected/1240bce9-8d5d-4df7-bc50-d660bc51c168-kube-api-access-jfcxd\") pod \"kube-proxy-z54tl\" (UID: \"1240bce9-8d5d-4df7-bc50-d660bc51c168\") " pod="kube-system/kube-proxy-z54tl" Feb 13 20:04:23.561935 kubelet[2667]: I0213 20:04:23.561131 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-bpf-maps\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561935 kubelet[2667]: I0213 
20:04:23.561148 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-net\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.561935 kubelet[2667]: I0213 20:04:23.561164 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-kernel\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561178 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsvwz\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-kube-api-access-zsvwz\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561193 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hostproc\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561232 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-lib-modules\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561246 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-config-path\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561261 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-cgroup\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562117 kubelet[2667]: I0213 20:04:23.561276 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f913e46-84c7-4b14-80ff-24e7ab8059c1-clustermesh-secrets\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.562243 kubelet[2667]: I0213 20:04:23.561294 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cni-path\") pod \"cilium-4hq92\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") " pod="kube-system/cilium-4hq92" Feb 13 20:04:23.796302 containerd[1465]: time="2025-02-13T20:04:23.795725251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z54tl,Uid:1240bce9-8d5d-4df7-bc50-d660bc51c168,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:23.828691 containerd[1465]: time="2025-02-13T20:04:23.828651173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hq92,Uid:0f913e46-84c7-4b14-80ff-24e7ab8059c1,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:23.833928 systemd[1]: Created slice kubepods-besteffort-pod3fc588af_6e71_4d0e_b042_cd18a190feef.slice - libcontainer container 
kubepods-besteffort-pod3fc588af_6e71_4d0e_b042_cd18a190feef.slice. Feb 13 20:04:23.840628 containerd[1465]: time="2025-02-13T20:04:23.840493156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:23.840628 containerd[1465]: time="2025-02-13T20:04:23.840573196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:23.840628 containerd[1465]: time="2025-02-13T20:04:23.840585356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:23.842100 containerd[1465]: time="2025-02-13T20:04:23.840866994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:23.864199 kubelet[2667]: I0213 20:04:23.864120 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddpt\" (UniqueName: \"kubernetes.io/projected/3fc588af-6e71-4d0e-b042-cd18a190feef-kube-api-access-fddpt\") pod \"cilium-operator-5d85765b45-7mq6j\" (UID: \"3fc588af-6e71-4d0e-b042-cd18a190feef\") " pod="kube-system/cilium-operator-5d85765b45-7mq6j" Feb 13 20:04:23.864199 kubelet[2667]: I0213 20:04:23.864166 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fc588af-6e71-4d0e-b042-cd18a190feef-cilium-config-path\") pod \"cilium-operator-5d85765b45-7mq6j\" (UID: \"3fc588af-6e71-4d0e-b042-cd18a190feef\") " pod="kube-system/cilium-operator-5d85765b45-7mq6j" Feb 13 20:04:23.881242 systemd[1]: Started cri-containerd-f4ea6b3da0df176c62978403403d8a93616401c0c755ec36d67e653b56146313.scope - libcontainer container f4ea6b3da0df176c62978403403d8a93616401c0c755ec36d67e653b56146313. 
Feb 13 20:04:23.893698 containerd[1465]: time="2025-02-13T20:04:23.893436462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:23.893698 containerd[1465]: time="2025-02-13T20:04:23.893494901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:23.893698 containerd[1465]: time="2025-02-13T20:04:23.893515621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:23.893698 containerd[1465]: time="2025-02-13T20:04:23.893602541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:23.918974 systemd[1]: Started cri-containerd-061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa.scope - libcontainer container 061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa. 
Feb 13 20:04:23.922343 containerd[1465]: time="2025-02-13T20:04:23.922221403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z54tl,Uid:1240bce9-8d5d-4df7-bc50-d660bc51c168,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4ea6b3da0df176c62978403403d8a93616401c0c755ec36d67e653b56146313\"" Feb 13 20:04:23.928071 containerd[1465]: time="2025-02-13T20:04:23.927889856Z" level=info msg="CreateContainer within sandbox \"f4ea6b3da0df176c62978403403d8a93616401c0c755ec36d67e653b56146313\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:04:23.948067 containerd[1465]: time="2025-02-13T20:04:23.947444562Z" level=info msg="CreateContainer within sandbox \"f4ea6b3da0df176c62978403403d8a93616401c0c755ec36d67e653b56146313\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f1b7c3919c500f8f4a40838af0759f445ead9389365c4e3847af973ec4e9115\"" Feb 13 20:04:23.950065 containerd[1465]: time="2025-02-13T20:04:23.949701551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4hq92,Uid:0f913e46-84c7-4b14-80ff-24e7ab8059c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\"" Feb 13 20:04:23.950065 containerd[1465]: time="2025-02-13T20:04:23.949992550Z" level=info msg="StartContainer for \"8f1b7c3919c500f8f4a40838af0759f445ead9389365c4e3847af973ec4e9115\"" Feb 13 20:04:23.954682 containerd[1465]: time="2025-02-13T20:04:23.954523008Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:04:23.989276 systemd[1]: Started cri-containerd-8f1b7c3919c500f8f4a40838af0759f445ead9389365c4e3847af973ec4e9115.scope - libcontainer container 8f1b7c3919c500f8f4a40838af0759f445ead9389365c4e3847af973ec4e9115. 
Feb 13 20:04:24.019384 containerd[1465]: time="2025-02-13T20:04:24.019293468Z" level=info msg="StartContainer for \"8f1b7c3919c500f8f4a40838af0759f445ead9389365c4e3847af973ec4e9115\" returns successfully" Feb 13 20:04:24.141802 containerd[1465]: time="2025-02-13T20:04:24.141729545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7mq6j,Uid:3fc588af-6e71-4d0e-b042-cd18a190feef,Namespace:kube-system,Attempt:0,}" Feb 13 20:04:24.173404 containerd[1465]: time="2025-02-13T20:04:24.173085963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:04:24.173404 containerd[1465]: time="2025-02-13T20:04:24.173173642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:04:24.173404 containerd[1465]: time="2025-02-13T20:04:24.173198482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:24.174077 containerd[1465]: time="2025-02-13T20:04:24.173812921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:04:24.196243 systemd[1]: Started cri-containerd-7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e.scope - libcontainer container 7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e. 
Feb 13 20:04:24.253808 containerd[1465]: time="2025-02-13T20:04:24.253722282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-7mq6j,Uid:3fc588af-6e71-4d0e-b042-cd18a190feef,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\"" Feb 13 20:04:26.015339 kubelet[2667]: I0213 20:04:26.014885 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z54tl" podStartSLOduration=3.014854978 podStartE2EDuration="3.014854978s" podCreationTimestamp="2025-02-13 20:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:24.138555351 +0000 UTC m=+6.203670442" watchObservedRunningTime="2025-02-13 20:04:26.014854978 +0000 UTC m=+8.079970069" Feb 13 20:04:28.311534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665454138.mount: Deactivated successfully. 
Feb 13 20:04:29.799576 containerd[1465]: time="2025-02-13T20:04:29.799366997Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:04:29.804054 containerd[1465]: time="2025-02-13T20:04:29.802393910Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 20:04:29.804912 containerd[1465]: time="2025-02-13T20:04:29.804849696Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:04:29.808308 containerd[1465]: time="2025-02-13T20:04:29.808258973Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.853690565s"
Feb 13 20:04:29.808308 containerd[1465]: time="2025-02-13T20:04:29.808311494Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 20:04:29.810304 containerd[1465]: time="2025-02-13T20:04:29.810267035Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 20:04:29.811621 containerd[1465]: time="2025-02-13T20:04:29.811581729Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 20:04:29.829725 containerd[1465]: time="2025-02-13T20:04:29.829676645Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\""
Feb 13 20:04:29.830770 containerd[1465]: time="2025-02-13T20:04:29.830660256Z" level=info msg="StartContainer for \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\""
Feb 13 20:04:29.860255 systemd[1]: Started cri-containerd-4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54.scope - libcontainer container 4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54.
Feb 13 20:04:29.890379 containerd[1465]: time="2025-02-13T20:04:29.890247261Z" level=info msg="StartContainer for \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\" returns successfully"
Feb 13 20:04:29.904886 systemd[1]: cri-containerd-4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54.scope: Deactivated successfully.
Feb 13 20:04:30.116170 containerd[1465]: time="2025-02-13T20:04:30.115893890Z" level=info msg="shim disconnected" id=4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54 namespace=k8s.io Feb 13 20:04:30.116170 containerd[1465]: time="2025-02-13T20:04:30.116161374Z" level=warning msg="cleaning up after shim disconnected" id=4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54 namespace=k8s.io Feb 13 20:04:30.116448 containerd[1465]: time="2025-02-13T20:04:30.116186294Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:04:30.140265 containerd[1465]: time="2025-02-13T20:04:30.140215610Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:04:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 20:04:30.148782 containerd[1465]: time="2025-02-13T20:04:30.148635601Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:04:30.168471 containerd[1465]: time="2025-02-13T20:04:30.168393461Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\"" Feb 13 20:04:30.170390 containerd[1465]: time="2025-02-13T20:04:30.169345193Z" level=info msg="StartContainer for \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\"" Feb 13 20:04:30.197384 systemd[1]: Started cri-containerd-1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e.scope - libcontainer container 1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e. 
Feb 13 20:04:30.227674 containerd[1465]: time="2025-02-13T20:04:30.227611440Z" level=info msg="StartContainer for \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\" returns successfully" Feb 13 20:04:30.243187 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:04:30.243680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:04:30.243774 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:04:30.250180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:04:30.252284 systemd[1]: cri-containerd-1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e.scope: Deactivated successfully. Feb 13 20:04:30.275411 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:04:30.282818 containerd[1465]: time="2025-02-13T20:04:30.282751285Z" level=info msg="shim disconnected" id=1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e namespace=k8s.io Feb 13 20:04:30.282818 containerd[1465]: time="2025-02-13T20:04:30.282813726Z" level=warning msg="cleaning up after shim disconnected" id=1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e namespace=k8s.io Feb 13 20:04:30.282818 containerd[1465]: time="2025-02-13T20:04:30.282823366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:04:30.824380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54-rootfs.mount: Deactivated successfully. Feb 13 20:04:31.155355 containerd[1465]: time="2025-02-13T20:04:31.155258540Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:04:31.184291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1722231824.mount: Deactivated successfully. 
Feb 13 20:04:31.188412 containerd[1465]: time="2025-02-13T20:04:31.188347610Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\"" Feb 13 20:04:31.189455 containerd[1465]: time="2025-02-13T20:04:31.189423666Z" level=info msg="StartContainer for \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\"" Feb 13 20:04:31.231201 systemd[1]: Started cri-containerd-48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438.scope - libcontainer container 48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438. Feb 13 20:04:31.264101 containerd[1465]: time="2025-02-13T20:04:31.264021576Z" level=info msg="StartContainer for \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\" returns successfully" Feb 13 20:04:31.270216 systemd[1]: cri-containerd-48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438.scope: Deactivated successfully. Feb 13 20:04:31.301491 containerd[1465]: time="2025-02-13T20:04:31.301254469Z" level=info msg="shim disconnected" id=48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438 namespace=k8s.io Feb 13 20:04:31.301491 containerd[1465]: time="2025-02-13T20:04:31.301320350Z" level=warning msg="cleaning up after shim disconnected" id=48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438 namespace=k8s.io Feb 13 20:04:31.301491 containerd[1465]: time="2025-02-13T20:04:31.301331630Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:04:31.824946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438-rootfs.mount: Deactivated successfully. 
Feb 13 20:04:32.166784 containerd[1465]: time="2025-02-13T20:04:32.166552082Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:04:32.202847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417996956.mount: Deactivated successfully. Feb 13 20:04:32.204243 containerd[1465]: time="2025-02-13T20:04:32.203757736Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\"" Feb 13 20:04:32.207005 containerd[1465]: time="2025-02-13T20:04:32.205870813Z" level=info msg="StartContainer for \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\"" Feb 13 20:04:32.265509 systemd[1]: Started cri-containerd-d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b.scope - libcontainer container d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b. Feb 13 20:04:32.308732 systemd[1]: cri-containerd-d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b.scope: Deactivated successfully. 
Feb 13 20:04:32.318764 containerd[1465]: time="2025-02-13T20:04:32.318601956Z" level=info msg="StartContainer for \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\" returns successfully"
Feb 13 20:04:32.394262 containerd[1465]: time="2025-02-13T20:04:32.394199446Z" level=info msg="shim disconnected" id=d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b namespace=k8s.io
Feb 13 20:04:32.394743 containerd[1465]: time="2025-02-13T20:04:32.394530012Z" level=warning msg="cleaning up after shim disconnected" id=d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b namespace=k8s.io
Feb 13 20:04:32.394743 containerd[1465]: time="2025-02-13T20:04:32.394653094Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:04:32.415585 containerd[1465]: time="2025-02-13T20:04:32.415468260Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:04:32Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:04:32.463225 containerd[1465]: time="2025-02-13T20:04:32.463054137Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:04:32.464813 containerd[1465]: time="2025-02-13T20:04:32.464729327Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 20:04:32.466392 containerd[1465]: time="2025-02-13T20:04:32.466320595Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:04:32.467784 containerd[1465]: time="2025-02-13T20:04:32.467640178Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.657175781s"
Feb 13 20:04:32.467784 containerd[1465]: time="2025-02-13T20:04:32.467691499Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 20:04:32.471104 containerd[1465]: time="2025-02-13T20:04:32.470743112Z" level=info msg="CreateContainer within sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 20:04:32.497882 containerd[1465]: time="2025-02-13T20:04:32.497771948Z" level=info msg="CreateContainer within sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\""
Feb 13 20:04:32.499197 containerd[1465]: time="2025-02-13T20:04:32.499049650Z" level=info msg="StartContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\""
Feb 13 20:04:32.526252 systemd[1]: Started cri-containerd-9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803.scope - libcontainer container 9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803.
Feb 13 20:04:32.553535 containerd[1465]: time="2025-02-13T20:04:32.553423607Z" level=info msg="StartContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" returns successfully" Feb 13 20:04:32.826067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b-rootfs.mount: Deactivated successfully. Feb 13 20:04:33.173441 containerd[1465]: time="2025-02-13T20:04:33.173380797Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:04:33.199785 containerd[1465]: time="2025-02-13T20:04:33.199731676Z" level=info msg="CreateContainer within sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\"" Feb 13 20:04:33.200731 containerd[1465]: time="2025-02-13T20:04:33.200696855Z" level=info msg="StartContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\"" Feb 13 20:04:33.258275 systemd[1]: Started cri-containerd-d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef.scope - libcontainer container d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef. 
Feb 13 20:04:33.350204 containerd[1465]: time="2025-02-13T20:04:33.349053059Z" level=info msg="StartContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" returns successfully" Feb 13 20:04:33.432585 kubelet[2667]: I0213 20:04:33.432416 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-7mq6j" podStartSLOduration=2.218961981 podStartE2EDuration="10.432374421s" podCreationTimestamp="2025-02-13 20:04:23 +0000 UTC" firstStartedPulling="2025-02-13 20:04:24.255443039 +0000 UTC m=+6.320558090" lastFinishedPulling="2025-02-13 20:04:32.468855479 +0000 UTC m=+14.533970530" observedRunningTime="2025-02-13 20:04:33.310256094 +0000 UTC m=+15.375371145" watchObservedRunningTime="2025-02-13 20:04:33.432374421 +0000 UTC m=+15.497489472" Feb 13 20:04:33.599085 kubelet[2667]: I0213 20:04:33.597608 2667 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:04:33.670814 systemd[1]: Created slice kubepods-burstable-pod518217a1_afb8_4c61_85ce_3a7d1220713f.slice - libcontainer container kubepods-burstable-pod518217a1_afb8_4c61_85ce_3a7d1220713f.slice. Feb 13 20:04:33.688106 systemd[1]: Created slice kubepods-burstable-pod6af4ecf5_3c21_4ee2_8da4_1131e40adc7d.slice - libcontainer container kubepods-burstable-pod6af4ecf5_3c21_4ee2_8da4_1131e40adc7d.slice. 
Feb 13 20:04:33.831926 kubelet[2667]: I0213 20:04:33.831665 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/518217a1-afb8-4c61-85ce-3a7d1220713f-config-volume\") pod \"coredns-6f6b679f8f-p6jlm\" (UID: \"518217a1-afb8-4c61-85ce-3a7d1220713f\") " pod="kube-system/coredns-6f6b679f8f-p6jlm"
Feb 13 20:04:33.831926 kubelet[2667]: I0213 20:04:33.831768 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26h4w\" (UniqueName: \"kubernetes.io/projected/518217a1-afb8-4c61-85ce-3a7d1220713f-kube-api-access-26h4w\") pod \"coredns-6f6b679f8f-p6jlm\" (UID: \"518217a1-afb8-4c61-85ce-3a7d1220713f\") " pod="kube-system/coredns-6f6b679f8f-p6jlm"
Feb 13 20:04:33.831926 kubelet[2667]: I0213 20:04:33.831791 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kml25\" (UniqueName: \"kubernetes.io/projected/6af4ecf5-3c21-4ee2-8da4-1131e40adc7d-kube-api-access-kml25\") pod \"coredns-6f6b679f8f-lpncs\" (UID: \"6af4ecf5-3c21-4ee2-8da4-1131e40adc7d\") " pod="kube-system/coredns-6f6b679f8f-lpncs"
Feb 13 20:04:33.831926 kubelet[2667]: I0213 20:04:33.831812 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6af4ecf5-3c21-4ee2-8da4-1131e40adc7d-config-volume\") pod \"coredns-6f6b679f8f-lpncs\" (UID: \"6af4ecf5-3c21-4ee2-8da4-1131e40adc7d\") " pod="kube-system/coredns-6f6b679f8f-lpncs"
Feb 13 20:04:33.982595 containerd[1465]: time="2025-02-13T20:04:33.981729166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p6jlm,Uid:518217a1-afb8-4c61-85ce-3a7d1220713f,Namespace:kube-system,Attempt:0,}"
Feb 13 20:04:33.993781 containerd[1465]: time="2025-02-13T20:04:33.993735283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lpncs,Uid:6af4ecf5-3c21-4ee2-8da4-1131e40adc7d,Namespace:kube-system,Attempt:0,}"
Feb 13 20:04:34.200652 kubelet[2667]: I0213 20:04:34.200577 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4hq92" podStartSLOduration=5.343509881 podStartE2EDuration="11.200556528s" podCreationTimestamp="2025-02-13 20:04:23 +0000 UTC" firstStartedPulling="2025-02-13 20:04:23.952361579 +0000 UTC m=+6.017476630" lastFinishedPulling="2025-02-13 20:04:29.809408226 +0000 UTC m=+11.874523277" observedRunningTime="2025-02-13 20:04:34.198961933 +0000 UTC m=+16.264076984" watchObservedRunningTime="2025-02-13 20:04:34.200556528 +0000 UTC m=+16.265671579"
Feb 13 20:04:36.493708 systemd-networkd[1380]: cilium_host: Link UP
Feb 13 20:04:36.493849 systemd-networkd[1380]: cilium_net: Link UP
Feb 13 20:04:36.493978 systemd-networkd[1380]: cilium_net: Gained carrier
Feb 13 20:04:36.497007 systemd-networkd[1380]: cilium_host: Gained carrier
Feb 13 20:04:36.619122 systemd-networkd[1380]: cilium_vxlan: Link UP
Feb 13 20:04:36.619336 systemd-networkd[1380]: cilium_vxlan: Gained carrier
Feb 13 20:04:36.794324 systemd-networkd[1380]: cilium_net: Gained IPv6LL
Feb 13 20:04:36.921106 kernel: NET: Registered PF_ALG protocol family
Feb 13 20:04:37.091220 systemd-networkd[1380]: cilium_host: Gained IPv6LL
Feb 13 20:04:37.648818 systemd-networkd[1380]: lxc_health: Link UP
Feb 13 20:04:37.661643 systemd-networkd[1380]: lxc_health: Gained carrier
Feb 13 20:04:38.083008 systemd-networkd[1380]: lxc8ed51d06d25f: Link UP
Feb 13 20:04:38.094532 kernel: eth0: renamed from tmpb64ce
Feb 13 20:04:38.097891 systemd-networkd[1380]: lxc2ee9c60145bc: Link UP
Feb 13 20:04:38.104715 kernel: eth0: renamed from tmp0f3a5
Feb 13 20:04:38.103916 systemd-networkd[1380]: lxc8ed51d06d25f: Gained carrier
Feb 13 20:04:38.110770 systemd-networkd[1380]: lxc2ee9c60145bc: Gained carrier
Feb 13 20:04:38.434303 systemd-networkd[1380]: cilium_vxlan: Gained IPv6LL
Feb 13 20:04:39.138379 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Feb 13 20:04:39.394843 systemd-networkd[1380]: lxc2ee9c60145bc: Gained IPv6LL
Feb 13 20:04:39.586352 systemd-networkd[1380]: lxc8ed51d06d25f: Gained IPv6LL
Feb 13 20:04:42.237069 containerd[1465]: time="2025-02-13T20:04:42.235177712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:04:42.237069 containerd[1465]: time="2025-02-13T20:04:42.235362479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:04:42.237069 containerd[1465]: time="2025-02-13T20:04:42.235481803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:04:42.237069 containerd[1465]: time="2025-02-13T20:04:42.236278192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:04:42.278618 containerd[1465]: time="2025-02-13T20:04:42.277530477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:04:42.280118 containerd[1465]: time="2025-02-13T20:04:42.279072533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:04:42.280118 containerd[1465]: time="2025-02-13T20:04:42.279111454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:04:42.280118 containerd[1465]: time="2025-02-13T20:04:42.279374183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:04:42.283287 systemd[1]: Started cri-containerd-0f3a5d4a35615493f55e9aa18a4cc043c2e8cd18e2731b873ab01071363f03b5.scope - libcontainer container 0f3a5d4a35615493f55e9aa18a4cc043c2e8cd18e2731b873ab01071363f03b5.
Feb 13 20:04:42.319333 systemd[1]: Started cri-containerd-b64cef4b246574345de6d6c4f630492c0db14c181ce665baae603961401912f3.scope - libcontainer container b64cef4b246574345de6d6c4f630492c0db14c181ce665baae603961401912f3.
Feb 13 20:04:42.368958 containerd[1465]: time="2025-02-13T20:04:42.366736329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lpncs,Uid:6af4ecf5-3c21-4ee2-8da4-1131e40adc7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f3a5d4a35615493f55e9aa18a4cc043c2e8cd18e2731b873ab01071363f03b5\""
Feb 13 20:04:42.381505 containerd[1465]: time="2025-02-13T20:04:42.381465020Z" level=info msg="CreateContainer within sandbox \"0f3a5d4a35615493f55e9aa18a4cc043c2e8cd18e2731b873ab01071363f03b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:04:42.397351 containerd[1465]: time="2025-02-13T20:04:42.397269269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-p6jlm,Uid:518217a1-afb8-4c61-85ce-3a7d1220713f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b64cef4b246574345de6d6c4f630492c0db14c181ce665baae603961401912f3\""
Feb 13 20:04:42.407865 containerd[1465]: time="2025-02-13T20:04:42.407805208Z" level=info msg="CreateContainer within sandbox \"b64cef4b246574345de6d6c4f630492c0db14c181ce665baae603961401912f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 20:04:42.411278 containerd[1465]: time="2025-02-13T20:04:42.409938405Z" level=info msg="CreateContainer within sandbox \"0f3a5d4a35615493f55e9aa18a4cc043c2e8cd18e2731b873ab01071363f03b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba94eacf37cbdba5d9d41a0eca79dd8bec72559a04411cbaba324f245554b8f1\""
Feb 13 20:04:42.424215 containerd[1465]: time="2025-02-13T20:04:42.423739822Z" level=info msg="StartContainer for \"ba94eacf37cbdba5d9d41a0eca79dd8bec72559a04411cbaba324f245554b8f1\""
Feb 13 20:04:42.431378 containerd[1465]: time="2025-02-13T20:04:42.431322935Z" level=info msg="CreateContainer within sandbox \"b64cef4b246574345de6d6c4f630492c0db14c181ce665baae603961401912f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2536d11d75d2a4c60ea615df02b77a0ac26b2c4eb85cbd8e43475fe1b416ba98\""
Feb 13 20:04:42.434736 containerd[1465]: time="2025-02-13T20:04:42.434693376Z" level=info msg="StartContainer for \"2536d11d75d2a4c60ea615df02b77a0ac26b2c4eb85cbd8e43475fe1b416ba98\""
Feb 13 20:04:42.470255 systemd[1]: Started cri-containerd-ba94eacf37cbdba5d9d41a0eca79dd8bec72559a04411cbaba324f245554b8f1.scope - libcontainer container ba94eacf37cbdba5d9d41a0eca79dd8bec72559a04411cbaba324f245554b8f1.
Feb 13 20:04:42.484270 systemd[1]: Started cri-containerd-2536d11d75d2a4c60ea615df02b77a0ac26b2c4eb85cbd8e43475fe1b416ba98.scope - libcontainer container 2536d11d75d2a4c60ea615df02b77a0ac26b2c4eb85cbd8e43475fe1b416ba98.
Feb 13 20:04:42.521130 containerd[1465]: time="2025-02-13T20:04:42.520846358Z" level=info msg="StartContainer for \"ba94eacf37cbdba5d9d41a0eca79dd8bec72559a04411cbaba324f245554b8f1\" returns successfully" Feb 13 20:04:42.530293 containerd[1465]: time="2025-02-13T20:04:42.529415467Z" level=info msg="StartContainer for \"2536d11d75d2a4c60ea615df02b77a0ac26b2c4eb85cbd8e43475fe1b416ba98\" returns successfully" Feb 13 20:04:43.222153 kubelet[2667]: I0213 20:04:43.221977 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lpncs" podStartSLOduration=20.221959223 podStartE2EDuration="20.221959223s" podCreationTimestamp="2025-02-13 20:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:43.219401927 +0000 UTC m=+25.284516978" watchObservedRunningTime="2025-02-13 20:04:43.221959223 +0000 UTC m=+25.287074274" Feb 13 20:04:43.267450 kubelet[2667]: I0213 20:04:43.266972 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-p6jlm" podStartSLOduration=20.266926751 podStartE2EDuration="20.266926751s" podCreationTimestamp="2025-02-13 20:04:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:04:43.238098909 +0000 UTC m=+25.303214000" watchObservedRunningTime="2025-02-13 20:04:43.266926751 +0000 UTC m=+25.332041802" Feb 13 20:04:45.155778 kubelet[2667]: I0213 20:04:45.155634 2667 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:06:13.328496 systemd[1]: Started sshd@7-49.13.3.212:22-194.0.234.38:49230.service - OpenSSH per-connection server daemon (194.0.234.38:49230). 
Feb 13 20:06:15.136936 sshd[4057]: Invalid user backups from 194.0.234.38 port 49230 Feb 13 20:06:15.503352 sshd[4057]: Connection closed by invalid user backups 194.0.234.38 port 49230 [preauth] Feb 13 20:06:15.507366 systemd[1]: sshd@7-49.13.3.212:22-194.0.234.38:49230.service: Deactivated successfully. Feb 13 20:08:03.134383 systemd[1]: Started sshd@8-49.13.3.212:22-134.209.22.126:56272.service - OpenSSH per-connection server daemon (134.209.22.126:56272). Feb 13 20:08:03.258595 sshd[4079]: Connection closed by authenticating user root 134.209.22.126 port 56272 [preauth] Feb 13 20:08:03.262239 systemd[1]: sshd@8-49.13.3.212:22-134.209.22.126:56272.service: Deactivated successfully. Feb 13 20:09:06.952775 systemd[1]: Started sshd@9-49.13.3.212:22-147.75.109.163:55686.service - OpenSSH per-connection server daemon (147.75.109.163:55686). Feb 13 20:09:07.940268 sshd[4091]: Accepted publickey for core from 147.75.109.163 port 55686 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:07.942526 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:07.952691 systemd-logind[1443]: New session 8 of user core. Feb 13 20:09:07.958287 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 20:09:08.718595 sshd[4091]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:08.723867 systemd[1]: sshd@9-49.13.3.212:22-147.75.109.163:55686.service: Deactivated successfully. Feb 13 20:09:08.726012 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 20:09:08.728084 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Feb 13 20:09:08.729970 systemd-logind[1443]: Removed session 8. Feb 13 20:09:13.898549 systemd[1]: Started sshd@10-49.13.3.212:22-147.75.109.163:41502.service - OpenSSH per-connection server daemon (147.75.109.163:41502). 
Feb 13 20:09:14.875739 sshd[4105]: Accepted publickey for core from 147.75.109.163 port 41502 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:14.878016 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:14.882896 systemd-logind[1443]: New session 9 of user core. Feb 13 20:09:14.891431 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 20:09:15.637364 sshd[4105]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:15.644138 systemd[1]: sshd@10-49.13.3.212:22-147.75.109.163:41502.service: Deactivated successfully. Feb 13 20:09:15.646944 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 20:09:15.648799 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Feb 13 20:09:15.649964 systemd-logind[1443]: Removed session 9. Feb 13 20:09:20.817482 systemd[1]: Started sshd@11-49.13.3.212:22-147.75.109.163:35380.service - OpenSSH per-connection server daemon (147.75.109.163:35380). Feb 13 20:09:21.799647 sshd[4122]: Accepted publickey for core from 147.75.109.163 port 35380 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:09:21.801598 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:09:21.806383 systemd-logind[1443]: New session 10 of user core. Feb 13 20:09:21.811218 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:09:22.557723 sshd[4122]: pam_unix(sshd:session): session closed for user core Feb 13 20:09:22.561899 systemd[1]: sshd@11-49.13.3.212:22-147.75.109.163:35380.service: Deactivated successfully. Feb 13 20:09:22.564851 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:09:22.568257 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:09:22.569478 systemd-logind[1443]: Removed session 10. 
Feb 13 20:09:22.737365 systemd[1]: Started sshd@12-49.13.3.212:22-147.75.109.163:35386.service - OpenSSH per-connection server daemon (147.75.109.163:35386).
Feb 13 20:09:23.729564 sshd[4135]: Accepted publickey for core from 147.75.109.163 port 35386 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:23.732340 sshd[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:23.737948 systemd-logind[1443]: New session 11 of user core.
Feb 13 20:09:23.750785 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:09:24.536676 sshd[4135]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:24.540852 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:09:24.541226 systemd[1]: sshd@12-49.13.3.212:22-147.75.109.163:35386.service: Deactivated successfully.
Feb 13 20:09:24.543637 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:09:24.546839 systemd-logind[1443]: Removed session 11.
Feb 13 20:09:24.712466 systemd[1]: Started sshd@13-49.13.3.212:22-147.75.109.163:35388.service - OpenSSH per-connection server daemon (147.75.109.163:35388).
Feb 13 20:09:25.693209 sshd[4147]: Accepted publickey for core from 147.75.109.163 port 35388 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:25.695402 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:25.701173 systemd-logind[1443]: New session 12 of user core.
Feb 13 20:09:25.706315 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 20:09:26.449413 sshd[4147]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:26.454199 systemd[1]: sshd@13-49.13.3.212:22-147.75.109.163:35388.service: Deactivated successfully.
Feb 13 20:09:26.456703 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 20:09:26.459253 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit.
Feb 13 20:09:26.460654 systemd-logind[1443]: Removed session 12.
Feb 13 20:09:31.628980 systemd[1]: Started sshd@14-49.13.3.212:22-147.75.109.163:57072.service - OpenSSH per-connection server daemon (147.75.109.163:57072).
Feb 13 20:09:32.607505 sshd[4161]: Accepted publickey for core from 147.75.109.163 port 57072 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:32.609859 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:32.617514 systemd-logind[1443]: New session 13 of user core.
Feb 13 20:09:32.631417 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 20:09:33.364805 sshd[4161]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:33.369665 systemd[1]: sshd@14-49.13.3.212:22-147.75.109.163:57072.service: Deactivated successfully.
Feb 13 20:09:33.372061 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 20:09:33.374638 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit.
Feb 13 20:09:33.376007 systemd-logind[1443]: Removed session 13.
Feb 13 20:09:33.536293 systemd[1]: Started sshd@15-49.13.3.212:22-147.75.109.163:57076.service - OpenSSH per-connection server daemon (147.75.109.163:57076).
Feb 13 20:09:34.526303 sshd[4174]: Accepted publickey for core from 147.75.109.163 port 57076 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:34.528243 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:34.534713 systemd-logind[1443]: New session 14 of user core.
Feb 13 20:09:34.543465 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 20:09:35.327449 sshd[4174]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:35.333001 systemd[1]: sshd@15-49.13.3.212:22-147.75.109.163:57076.service: Deactivated successfully.
Feb 13 20:09:35.337267 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 20:09:35.339022 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit.
Feb 13 20:09:35.340508 systemd-logind[1443]: Removed session 14.
Feb 13 20:09:35.502349 systemd[1]: Started sshd@16-49.13.3.212:22-147.75.109.163:57084.service - OpenSSH per-connection server daemon (147.75.109.163:57084).
Feb 13 20:09:36.488793 sshd[4184]: Accepted publickey for core from 147.75.109.163 port 57084 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:36.490471 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:36.496102 systemd-logind[1443]: New session 15 of user core.
Feb 13 20:09:36.504387 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 20:09:38.937562 sshd[4184]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:38.943241 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit.
Feb 13 20:09:38.944186 systemd[1]: sshd@16-49.13.3.212:22-147.75.109.163:57084.service: Deactivated successfully.
Feb 13 20:09:38.947654 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 20:09:38.950650 systemd-logind[1443]: Removed session 15.
Feb 13 20:09:39.110446 systemd[1]: Started sshd@17-49.13.3.212:22-147.75.109.163:57092.service - OpenSSH per-connection server daemon (147.75.109.163:57092).
Feb 13 20:09:40.086274 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 57092 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:40.087756 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:40.093161 systemd-logind[1443]: New session 16 of user core.
Feb 13 20:09:40.098290 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 20:09:40.953375 sshd[4203]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:40.958271 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit.
Feb 13 20:09:40.959869 systemd[1]: sshd@17-49.13.3.212:22-147.75.109.163:57092.service: Deactivated successfully.
Feb 13 20:09:40.962753 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 20:09:40.969768 systemd-logind[1443]: Removed session 16.
Feb 13 20:09:41.121802 systemd[1]: Started sshd@18-49.13.3.212:22-147.75.109.163:46768.service - OpenSSH per-connection server daemon (147.75.109.163:46768).
Feb 13 20:09:42.116924 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 46768 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:42.119428 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:42.124485 systemd-logind[1443]: New session 17 of user core.
Feb 13 20:09:42.131180 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 20:09:42.878542 sshd[4215]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:42.882513 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit.
Feb 13 20:09:42.883775 systemd[1]: sshd@18-49.13.3.212:22-147.75.109.163:46768.service: Deactivated successfully.
Feb 13 20:09:42.887421 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 20:09:42.889336 systemd-logind[1443]: Removed session 17.
Feb 13 20:09:48.054361 systemd[1]: Started sshd@19-49.13.3.212:22-147.75.109.163:46782.service - OpenSSH per-connection server daemon (147.75.109.163:46782).
Feb 13 20:09:49.038784 sshd[4231]: Accepted publickey for core from 147.75.109.163 port 46782 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:49.039556 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:49.044673 systemd-logind[1443]: New session 18 of user core.
Feb 13 20:09:49.054321 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 20:09:49.782452 sshd[4231]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:49.787850 systemd[1]: sshd@19-49.13.3.212:22-147.75.109.163:46782.service: Deactivated successfully.
Feb 13 20:09:49.790795 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 20:09:49.791936 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit.
Feb 13 20:09:49.792989 systemd-logind[1443]: Removed session 18.
Feb 13 20:09:54.955941 systemd[1]: Started sshd@20-49.13.3.212:22-147.75.109.163:56908.service - OpenSSH per-connection server daemon (147.75.109.163:56908).
Feb 13 20:09:55.942613 sshd[4245]: Accepted publickey for core from 147.75.109.163 port 56908 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:55.945085 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:55.952420 systemd-logind[1443]: New session 19 of user core.
Feb 13 20:09:55.958342 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 20:09:56.696186 sshd[4245]: pam_unix(sshd:session): session closed for user core
Feb 13 20:09:56.701064 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit.
Feb 13 20:09:56.701312 systemd[1]: sshd@20-49.13.3.212:22-147.75.109.163:56908.service: Deactivated successfully.
Feb 13 20:09:56.705888 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 20:09:56.708896 systemd-logind[1443]: Removed session 19.
Feb 13 20:09:56.867750 systemd[1]: Started sshd@21-49.13.3.212:22-147.75.109.163:56922.service - OpenSSH per-connection server daemon (147.75.109.163:56922).
Feb 13 20:09:57.863754 sshd[4257]: Accepted publickey for core from 147.75.109.163 port 56922 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:09:57.865781 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:09:57.873323 systemd-logind[1443]: New session 20 of user core.
Feb 13 20:09:57.882321 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 20:10:00.200235 containerd[1465]: time="2025-02-13T20:10:00.198641900Z" level=info msg="StopContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" with timeout 30 (s)"
Feb 13 20:10:00.200235 containerd[1465]: time="2025-02-13T20:10:00.199364345Z" level=info msg="Stop container \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" with signal terminated"
Feb 13 20:10:00.221493 containerd[1465]: time="2025-02-13T20:10:00.221333066Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:10:00.222861 systemd[1]: cri-containerd-9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803.scope: Deactivated successfully.
Feb 13 20:10:00.233593 containerd[1465]: time="2025-02-13T20:10:00.233552024Z" level=info msg="StopContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" with timeout 2 (s)"
Feb 13 20:10:00.234070 containerd[1465]: time="2025-02-13T20:10:00.234019532Z" level=info msg="Stop container \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" with signal terminated"
Feb 13 20:10:00.245786 systemd-networkd[1380]: lxc_health: Link DOWN
Feb 13 20:10:00.245793 systemd-networkd[1380]: lxc_health: Lost carrier
Feb 13 20:10:00.267840 systemd[1]: cri-containerd-d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef.scope: Deactivated successfully.
Feb 13 20:10:00.268725 systemd[1]: cri-containerd-d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef.scope: Consumed 8.092s CPU time.
Feb 13 20:10:00.275493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803-rootfs.mount: Deactivated successfully.
Feb 13 20:10:00.289833 containerd[1465]: time="2025-02-13T20:10:00.289614097Z" level=info msg="shim disconnected" id=9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803 namespace=k8s.io
Feb 13 20:10:00.289833 containerd[1465]: time="2025-02-13T20:10:00.289695982Z" level=warning msg="cleaning up after shim disconnected" id=9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803 namespace=k8s.io
Feb 13 20:10:00.289833 containerd[1465]: time="2025-02-13T20:10:00.289706263Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:00.297109 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef-rootfs.mount: Deactivated successfully.
Feb 13 20:10:00.304313 containerd[1465]: time="2025-02-13T20:10:00.303802896Z" level=info msg="shim disconnected" id=d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef namespace=k8s.io
Feb 13 20:10:00.304313 containerd[1465]: time="2025-02-13T20:10:00.303858620Z" level=warning msg="cleaning up after shim disconnected" id=d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef namespace=k8s.io
Feb 13 20:10:00.304313 containerd[1465]: time="2025-02-13T20:10:00.303870461Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:00.310245 containerd[1465]: time="2025-02-13T20:10:00.310194652Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:10:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:10:00.317529 containerd[1465]: time="2025-02-13T20:10:00.317477984Z" level=info msg="StopContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" returns successfully"
Feb 13 20:10:00.319090 containerd[1465]: time="2025-02-13T20:10:00.319043481Z" level=info msg="StopPodSandbox for \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\""
Feb 13 20:10:00.319090 containerd[1465]: time="2025-02-13T20:10:00.319105845Z" level=info msg="Container to stop \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.324674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e-shm.mount: Deactivated successfully.
Feb 13 20:10:00.336542 systemd[1]: cri-containerd-7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e.scope: Deactivated successfully.
Feb 13 20:10:00.344343 containerd[1465]: time="2025-02-13T20:10:00.344214800Z" level=info msg="StopContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" returns successfully"
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345351791Z" level=info msg="StopPodSandbox for \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\""
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345524882Z" level=info msg="Container to stop \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345549363Z" level=info msg="Container to stop \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345559964Z" level=info msg="Container to stop \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345572884Z" level=info msg="Container to stop \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.345693 containerd[1465]: time="2025-02-13T20:10:00.345582445Z" level=info msg="Container to stop \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 20:10:00.349417 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa-shm.mount: Deactivated successfully.
Feb 13 20:10:00.359238 systemd[1]: cri-containerd-061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa.scope: Deactivated successfully.
Feb 13 20:10:00.386359 containerd[1465]: time="2025-02-13T20:10:00.386163160Z" level=info msg="shim disconnected" id=7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e namespace=k8s.io
Feb 13 20:10:00.387833 containerd[1465]: time="2025-02-13T20:10:00.387660892Z" level=warning msg="cleaning up after shim disconnected" id=7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e namespace=k8s.io
Feb 13 20:10:00.387833 containerd[1465]: time="2025-02-13T20:10:00.387698775Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:00.388392 containerd[1465]: time="2025-02-13T20:10:00.388233928Z" level=info msg="shim disconnected" id=061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa namespace=k8s.io
Feb 13 20:10:00.388392 containerd[1465]: time="2025-02-13T20:10:00.388288171Z" level=warning msg="cleaning up after shim disconnected" id=061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa namespace=k8s.io
Feb 13 20:10:00.388392 containerd[1465]: time="2025-02-13T20:10:00.388296812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:00.403804 containerd[1465]: time="2025-02-13T20:10:00.403641522Z" level=info msg="TearDown network for sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" successfully"
Feb 13 20:10:00.403804 containerd[1465]: time="2025-02-13T20:10:00.403680725Z" level=info msg="StopPodSandbox for \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" returns successfully"
Feb 13 20:10:00.407824 containerd[1465]: time="2025-02-13T20:10:00.407780619Z" level=info msg="TearDown network for sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" successfully"
Feb 13 20:10:00.407824 containerd[1465]: time="2025-02-13T20:10:00.407817981Z" level=info msg="StopPodSandbox for \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" returns successfully"
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.528891 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hostproc\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.528961 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-etc-cni-netd\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.529088 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fc588af-6e71-4d0e-b042-cd18a190feef-cilium-config-path\") pod \"3fc588af-6e71-4d0e-b042-cd18a190feef\" (UID: \"3fc588af-6e71-4d0e-b042-cd18a190feef\") "
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.529133 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-xtables-lock\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.529160 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-bpf-maps\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530154 kubelet[2667]: I0213 20:10:00.529188 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-kernel\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529216 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-net\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529241 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-lib-modules\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529273 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f913e46-84c7-4b14-80ff-24e7ab8059c1-clustermesh-secrets\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529306 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddpt\" (UniqueName: \"kubernetes.io/projected/3fc588af-6e71-4d0e-b042-cd18a190feef-kube-api-access-fddpt\") pod \"3fc588af-6e71-4d0e-b042-cd18a190feef\" (UID: \"3fc588af-6e71-4d0e-b042-cd18a190feef\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529337 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-config-path\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530656 kubelet[2667]: I0213 20:10:00.529364 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-cgroup\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.529391 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cni-path\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.529418 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-run\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.529448 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsvwz\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-kube-api-access-zsvwz\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.529482 2667 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hubble-tls\") pod \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\" (UID: \"0f913e46-84c7-4b14-80ff-24e7ab8059c1\") "
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.530129 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.530803 kubelet[2667]: I0213 20:10:00.530210 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.530943 kubelet[2667]: I0213 20:10:00.530243 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.535081 kubelet[2667]: I0213 20:10:00.534809 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fc588af-6e71-4d0e-b042-cd18a190feef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3fc588af-6e71-4d0e-b042-cd18a190feef" (UID: "3fc588af-6e71-4d0e-b042-cd18a190feef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 20:10:00.535081 kubelet[2667]: I0213 20:10:00.534885 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.535081 kubelet[2667]: I0213 20:10:00.534905 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.535081 kubelet[2667]: I0213 20:10:00.534920 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.535081 kubelet[2667]: I0213 20:10:00.534935 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.535778 kubelet[2667]: I0213 20:10:00.535726 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:10:00.535861 kubelet[2667]: I0213 20:10:00.535792 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.537225 kubelet[2667]: I0213 20:10:00.537187 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.538229 kubelet[2667]: I0213 20:10:00.537867 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 20:10:00.539813 kubelet[2667]: I0213 20:10:00.539765 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f913e46-84c7-4b14-80ff-24e7ab8059c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 20:10:00.541318 kubelet[2667]: I0213 20:10:00.541277 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fc588af-6e71-4d0e-b042-cd18a190feef-kube-api-access-fddpt" (OuterVolumeSpecName: "kube-api-access-fddpt") pod "3fc588af-6e71-4d0e-b042-cd18a190feef" (UID: "3fc588af-6e71-4d0e-b042-cd18a190feef"). InnerVolumeSpecName "kube-api-access-fddpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:10:00.541937 kubelet[2667]: I0213 20:10:00.541792 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 20:10:00.542230 kubelet[2667]: I0213 20:10:00.542203 2667 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-kube-api-access-zsvwz" (OuterVolumeSpecName: "kube-api-access-zsvwz") pod "0f913e46-84c7-4b14-80ff-24e7ab8059c1" (UID: "0f913e46-84c7-4b14-80ff-24e7ab8059c1"). InnerVolumeSpecName "kube-api-access-zsvwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 20:10:00.630518 kubelet[2667]: I0213 20:10:00.630457 2667 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-net\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630731 2667 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-lib-modules\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630757 2667 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f913e46-84c7-4b14-80ff-24e7ab8059c1-clustermesh-secrets\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630772 2667 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fddpt\" (UniqueName: \"kubernetes.io/projected/3fc588af-6e71-4d0e-b042-cd18a190feef-kube-api-access-fddpt\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630811 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-config-path\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630835 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-cgroup\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630850 2667 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cni-path\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630862 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-cilium-run\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.630955 kubelet[2667]: I0213 20:10:00.630890 2667 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-zsvwz\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-kube-api-access-zsvwz\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631364 kubelet[2667]: I0213 20:10:00.630903 2667 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hubble-tls\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631364 kubelet[2667]: I0213 20:10:00.630917 2667 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-hostproc\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631364 kubelet[2667]: I0213 20:10:00.630930 2667 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-etc-cni-netd\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631664 kubelet[2667]: I0213 20:10:00.630943 2667 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3fc588af-6e71-4d0e-b042-cd18a190feef-cilium-config-path\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631664 kubelet[2667]: I0213 20:10:00.631533 2667 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-xtables-lock\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631664 kubelet[2667]: I0213 20:10:00.631604 2667 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-bpf-maps\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:00.631664 kubelet[2667]: I0213 20:10:00.631622 2667 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f913e46-84c7-4b14-80ff-24e7ab8059c1-host-proc-sys-kernel\") on node \"ci-4081-3-1-1-94e317dfd2\" DevicePath \"\""
Feb 13 20:10:01.023348 kubelet[2667]: I0213 20:10:01.021661 2667 scope.go:117] "RemoveContainer" containerID="d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef"
Feb 13 20:10:01.025655 containerd[1465]: time="2025-02-13T20:10:01.025063908Z" level=info msg="RemoveContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\""
Feb 13 20:10:01.028910 systemd[1]: Removed slice kubepods-burstable-pod0f913e46_84c7_4b14_80ff_24e7ab8059c1.slice - libcontainer container kubepods-burstable-pod0f913e46_84c7_4b14_80ff_24e7ab8059c1.slice.
Feb 13 20:10:01.029050 systemd[1]: kubepods-burstable-pod0f913e46_84c7_4b14_80ff_24e7ab8059c1.slice: Consumed 8.183s CPU time.
Feb 13 20:10:01.034181 containerd[1465]: time="2025-02-13T20:10:01.033444468Z" level=info msg="RemoveContainer for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" returns successfully" Feb 13 20:10:01.039000 systemd[1]: Removed slice kubepods-besteffort-pod3fc588af_6e71_4d0e_b042_cd18a190feef.slice - libcontainer container kubepods-besteffort-pod3fc588af_6e71_4d0e_b042_cd18a190feef.slice. Feb 13 20:10:01.043383 kubelet[2667]: I0213 20:10:01.043346 2667 scope.go:117] "RemoveContainer" containerID="d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b" Feb 13 20:10:01.050309 containerd[1465]: time="2025-02-13T20:10:01.050238989Z" level=info msg="RemoveContainer for \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\"" Feb 13 20:10:01.054168 containerd[1465]: time="2025-02-13T20:10:01.054123630Z" level=info msg="RemoveContainer for \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\" returns successfully" Feb 13 20:10:01.060782 kubelet[2667]: I0213 20:10:01.057396 2667 scope.go:117] "RemoveContainer" containerID="48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438" Feb 13 20:10:01.061531 containerd[1465]: time="2025-02-13T20:10:01.061468206Z" level=info msg="RemoveContainer for \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\"" Feb 13 20:10:01.065218 containerd[1465]: time="2025-02-13T20:10:01.065177076Z" level=info msg="RemoveContainer for \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\" returns successfully" Feb 13 20:10:01.066810 kubelet[2667]: I0213 20:10:01.066783 2667 scope.go:117] "RemoveContainer" containerID="1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e" Feb 13 20:10:01.079524 containerd[1465]: time="2025-02-13T20:10:01.077871103Z" level=info msg="RemoveContainer for \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\"" Feb 13 20:10:01.084388 containerd[1465]: time="2025-02-13T20:10:01.084320343Z" level=info 
msg="RemoveContainer for \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\" returns successfully" Feb 13 20:10:01.084613 kubelet[2667]: I0213 20:10:01.084589 2667 scope.go:117] "RemoveContainer" containerID="4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54" Feb 13 20:10:01.086442 containerd[1465]: time="2025-02-13T20:10:01.086377511Z" level=info msg="RemoveContainer for \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\"" Feb 13 20:10:01.093578 containerd[1465]: time="2025-02-13T20:10:01.093373025Z" level=info msg="RemoveContainer for \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\" returns successfully" Feb 13 20:10:01.094234 kubelet[2667]: I0213 20:10:01.094200 2667 scope.go:117] "RemoveContainer" containerID="d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef" Feb 13 20:10:01.096306 containerd[1465]: time="2025-02-13T20:10:01.096239922Z" level=error msg="ContainerStatus for \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\": not found" Feb 13 20:10:01.097468 kubelet[2667]: E0213 20:10:01.097156 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\": not found" containerID="d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef" Feb 13 20:10:01.097468 kubelet[2667]: I0213 20:10:01.097196 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef"} err="failed to get container status \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"d00c9bf47b6ec736477a402bccc58513168337a4522868f62f13e16055cfaeef\": not found" Feb 13 20:10:01.097468 kubelet[2667]: I0213 20:10:01.097275 2667 scope.go:117] "RemoveContainer" containerID="d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b" Feb 13 20:10:01.097643 containerd[1465]: time="2025-02-13T20:10:01.097550364Z" level=error msg="ContainerStatus for \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\": not found" Feb 13 20:10:01.097771 kubelet[2667]: E0213 20:10:01.097736 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\": not found" containerID="d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b" Feb 13 20:10:01.097815 kubelet[2667]: I0213 20:10:01.097774 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b"} err="failed to get container status \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d74caf95291837d7c7c7b83046f95b4a9cbb523443f4a11a3be954a8d94f747b\": not found" Feb 13 20:10:01.097815 kubelet[2667]: I0213 20:10:01.097804 2667 scope.go:117] "RemoveContainer" containerID="48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438" Feb 13 20:10:01.098144 containerd[1465]: time="2025-02-13T20:10:01.098097998Z" level=error msg="ContainerStatus for \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\": not found" Feb 13 20:10:01.098341 kubelet[2667]: E0213 20:10:01.098304 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\": not found" containerID="48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438" Feb 13 20:10:01.098341 kubelet[2667]: I0213 20:10:01.098347 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438"} err="failed to get container status \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\": rpc error: code = NotFound desc = an error occurred when try to find container \"48ce538b4898b3e576313feeb9f8b54ce29d12e8315e2f7d9812314430da4438\": not found" Feb 13 20:10:01.098480 kubelet[2667]: I0213 20:10:01.098367 2667 scope.go:117] "RemoveContainer" containerID="1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e" Feb 13 20:10:01.100585 containerd[1465]: time="2025-02-13T20:10:01.100267252Z" level=error msg="ContainerStatus for \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\": not found" Feb 13 20:10:01.100685 kubelet[2667]: E0213 20:10:01.100437 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\": not found" containerID="1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e" Feb 13 20:10:01.100685 kubelet[2667]: I0213 20:10:01.100468 2667 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e"} err="failed to get container status \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1b30bcb5f1d8a946607a2f3ba4af71c514bceae94fa94a6b6ee376ad78e1ff1e\": not found" Feb 13 20:10:01.100685 kubelet[2667]: I0213 20:10:01.100489 2667 scope.go:117] "RemoveContainer" containerID="4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54" Feb 13 20:10:01.100984 containerd[1465]: time="2025-02-13T20:10:01.100903012Z" level=error msg="ContainerStatus for \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\": not found" Feb 13 20:10:01.101125 kubelet[2667]: E0213 20:10:01.101088 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\": not found" containerID="4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54" Feb 13 20:10:01.101171 kubelet[2667]: I0213 20:10:01.101133 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54"} err="failed to get container status \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a5af67ac447346cea1a9851102d4839971e8d3dd6ec3992c5bef3e463679c54\": not found" Feb 13 20:10:01.101171 kubelet[2667]: I0213 20:10:01.101155 2667 scope.go:117] "RemoveContainer" containerID="9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803" Feb 13 20:10:01.104388 containerd[1465]: 
time="2025-02-13T20:10:01.104337945Z" level=info msg="RemoveContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\"" Feb 13 20:10:01.112059 containerd[1465]: time="2025-02-13T20:10:01.110812706Z" level=info msg="RemoveContainer for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" returns successfully" Feb 13 20:10:01.112172 kubelet[2667]: I0213 20:10:01.111825 2667 scope.go:117] "RemoveContainer" containerID="9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803" Feb 13 20:10:01.112285 containerd[1465]: time="2025-02-13T20:10:01.112233834Z" level=error msg="ContainerStatus for \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\": not found" Feb 13 20:10:01.112437 kubelet[2667]: E0213 20:10:01.112407 2667 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\": not found" containerID="9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803" Feb 13 20:10:01.112495 kubelet[2667]: I0213 20:10:01.112443 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803"} err="failed to get container status \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\": rpc error: code = NotFound desc = an error occurred when try to find container \"9cd67c54a7c277db6552e675a576bb9c82c19152b560d6ced7a35d5841a9c803\": not found" Feb 13 20:10:01.199385 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e-rootfs.mount: Deactivated successfully. 
Feb 13 20:10:01.199514 systemd[1]: var-lib-kubelet-pods-3fc588af\x2d6e71\x2d4d0e\x2db042\x2dcd18a190feef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfddpt.mount: Deactivated successfully. Feb 13 20:10:01.199671 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa-rootfs.mount: Deactivated successfully. Feb 13 20:10:01.199745 systemd[1]: var-lib-kubelet-pods-0f913e46\x2d84c7\x2d4b14\x2d80ff\x2d24e7ab8059c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzsvwz.mount: Deactivated successfully. Feb 13 20:10:01.199816 systemd[1]: var-lib-kubelet-pods-0f913e46\x2d84c7\x2d4b14\x2d80ff\x2d24e7ab8059c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 20:10:01.199887 systemd[1]: var-lib-kubelet-pods-0f913e46\x2d84c7\x2d4b14\x2d80ff\x2d24e7ab8059c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 20:10:02.035488 kubelet[2667]: I0213 20:10:02.035345 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" path="/var/lib/kubelet/pods/0f913e46-84c7-4b14-80ff-24e7ab8059c1/volumes" Feb 13 20:10:02.037588 kubelet[2667]: I0213 20:10:02.037200 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fc588af-6e71-4d0e-b042-cd18a190feef" path="/var/lib/kubelet/pods/3fc588af-6e71-4d0e-b042-cd18a190feef/volumes" Feb 13 20:10:02.286333 sshd[4257]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:02.292464 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:10:02.293377 systemd[1]: sshd@21-49.13.3.212:22-147.75.109.163:56922.service: Deactivated successfully. Feb 13 20:10:02.297077 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:10:02.297382 systemd[1]: session-20.scope: Consumed 1.161s CPU time. Feb 13 20:10:02.298920 systemd-logind[1443]: Removed session 20. 
Feb 13 20:10:02.388804 systemd[1]: Started sshd@22-49.13.3.212:22-183.224.219.194:34052.service - OpenSSH per-connection server daemon (183.224.219.194:34052). Feb 13 20:10:02.401856 sshd[4424]: Connection closed by 183.224.219.194 port 34052 Feb 13 20:10:02.404240 systemd[1]: sshd@22-49.13.3.212:22-183.224.219.194:34052.service: Deactivated successfully. Feb 13 20:10:02.461372 systemd[1]: Started sshd@23-49.13.3.212:22-147.75.109.163:55660.service - OpenSSH per-connection server daemon (147.75.109.163:55660). Feb 13 20:10:02.648470 systemd[1]: Started sshd@24-49.13.3.212:22-183.224.219.194:36696.service - OpenSSH per-connection server daemon (183.224.219.194:36696). Feb 13 20:10:03.254422 kubelet[2667]: E0213 20:10:03.254202 2667 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 20:10:03.443937 sshd[4428]: Accepted publickey for core from 147.75.109.163 port 55660 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:10:03.445951 sshd[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:03.452177 systemd-logind[1443]: New session 21 of user core. Feb 13 20:10:03.460527 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816491 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="mount-cgroup" Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816525 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3fc588af-6e71-4d0e-b042-cd18a190feef" containerName="cilium-operator" Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816533 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="cilium-agent" Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816540 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="apply-sysctl-overwrites" Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816549 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="mount-bpf-fs" Feb 13 20:10:04.816547 kubelet[2667]: E0213 20:10:04.816555 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="clean-cilium-state" Feb 13 20:10:04.817226 kubelet[2667]: I0213 20:10:04.816583 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fc588af-6e71-4d0e-b042-cd18a190feef" containerName="cilium-operator" Feb 13 20:10:04.817226 kubelet[2667]: I0213 20:10:04.816589 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f913e46-84c7-4b14-80ff-24e7ab8059c1" containerName="cilium-agent" Feb 13 20:10:04.828147 kubelet[2667]: W0213 20:10:04.828023 2667 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-1-1-94e317dfd2" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-1-1-94e317dfd2' and 
this object Feb 13 20:10:04.828147 kubelet[2667]: E0213 20:10:04.828094 2667 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-1-1-94e317dfd2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-1-1-94e317dfd2' and this object" logger="UnhandledError" Feb 13 20:10:04.828616 systemd[1]: Created slice kubepods-burstable-pod05688e71_a8ff_4db8_820c_86420f9eef73.slice - libcontainer container kubepods-burstable-pod05688e71_a8ff_4db8_820c_86420f9eef73.slice. Feb 13 20:10:04.832946 kubelet[2667]: W0213 20:10:04.831541 2667 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-1-1-94e317dfd2" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-1-1-94e317dfd2' and this object Feb 13 20:10:04.832946 kubelet[2667]: E0213 20:10:04.831591 2667 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-1-1-94e317dfd2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-1-1-94e317dfd2' and this object" logger="UnhandledError" Feb 13 20:10:04.942910 sshd[4428]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:04.948336 systemd[1]: sshd@23-49.13.3.212:22-147.75.109.163:55660.service: Deactivated successfully. Feb 13 20:10:04.950657 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:10:04.953520 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. 
Feb 13 20:10:04.955515 systemd-logind[1443]: Removed session 21. Feb 13 20:10:04.964869 kubelet[2667]: I0213 20:10:04.964769 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79z2c\" (UniqueName: \"kubernetes.io/projected/05688e71-a8ff-4db8-820c-86420f9eef73-kube-api-access-79z2c\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.964869 kubelet[2667]: I0213 20:10:04.964836 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-cgroup\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.964869 kubelet[2667]: I0213 20:10:04.964870 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-lib-modules\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 kubelet[2667]: I0213 20:10:04.964903 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-config-path\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 kubelet[2667]: I0213 20:10:04.964932 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-ipsec-secrets\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 
kubelet[2667]: I0213 20:10:04.964972 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-bpf-maps\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 kubelet[2667]: I0213 20:10:04.964996 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-xtables-lock\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 kubelet[2667]: I0213 20:10:04.965045 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-host-proc-sys-net\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965499 kubelet[2667]: I0213 20:10:04.965072 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-cni-path\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965097 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05688e71-a8ff-4db8-820c-86420f9eef73-clustermesh-secrets\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965187 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-run\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965259 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-hostproc\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965302 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05688e71-a8ff-4db8-820c-86420f9eef73-hubble-tls\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965330 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-etc-cni-netd\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:04.965666 kubelet[2667]: I0213 20:10:04.965361 2667 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05688e71-a8ff-4db8-820c-86420f9eef73-host-proc-sys-kernel\") pod \"cilium-6r2x6\" (UID: \"05688e71-a8ff-4db8-820c-86420f9eef73\") " pod="kube-system/cilium-6r2x6" Feb 13 20:10:05.113477 systemd[1]: Started sshd@25-49.13.3.212:22-147.75.109.163:55676.service - OpenSSH per-connection server daemon (147.75.109.163:55676). 
Feb 13 20:10:05.758197 kubelet[2667]: I0213 20:10:05.758101 2667 setters.go:600] "Node became not ready" node="ci-4081-3-1-1-94e317dfd2" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:10:05Z","lastTransitionTime":"2025-02-13T20:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 20:10:06.070128 kubelet[2667]: E0213 20:10:06.068742 2667 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:10:06.070128 kubelet[2667]: E0213 20:10:06.069570 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-config-path podName:05688e71-a8ff-4db8-820c-86420f9eef73 nodeName:}" failed. No retries permitted until 2025-02-13 20:10:06.568825977 +0000 UTC m=+348.633941028 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/05688e71-a8ff-4db8-820c-86420f9eef73-cilium-config-path") pod "cilium-6r2x6" (UID: "05688e71-a8ff-4db8-820c-86420f9eef73") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:10:06.087059 sshd[4445]: Accepted publickey for core from 147.75.109.163 port 55676 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034 Feb 13 20:10:06.089121 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:10:06.097100 systemd-logind[1443]: New session 22 of user core. Feb 13 20:10:06.102300 systemd[1]: Started session-22.scope - Session 22 of User core. 
Feb 13 20:10:06.637152 containerd[1465]: time="2025-02-13T20:10:06.637012532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6r2x6,Uid:05688e71-a8ff-4db8-820c-86420f9eef73,Namespace:kube-system,Attempt:0,}" Feb 13 20:10:06.660957 containerd[1465]: time="2025-02-13T20:10:06.660628324Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:10:06.660957 containerd[1465]: time="2025-02-13T20:10:06.660693168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:10:06.660957 containerd[1465]: time="2025-02-13T20:10:06.660704528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:06.660957 containerd[1465]: time="2025-02-13T20:10:06.660788494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:10:06.688226 systemd[1]: Started cri-containerd-f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5.scope - libcontainer container f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5. 
Feb 13 20:10:06.719233 containerd[1465]: time="2025-02-13T20:10:06.718887833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6r2x6,Uid:05688e71-a8ff-4db8-820c-86420f9eef73,Namespace:kube-system,Attempt:0,} returns sandbox id \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\"" Feb 13 20:10:06.723786 containerd[1465]: time="2025-02-13T20:10:06.723527482Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:10:06.740725 containerd[1465]: time="2025-02-13T20:10:06.740662269Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28\"" Feb 13 20:10:06.742219 containerd[1465]: time="2025-02-13T20:10:06.742169563Z" level=info msg="StartContainer for \"b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28\"" Feb 13 20:10:06.760572 sshd[4445]: pam_unix(sshd:session): session closed for user core Feb 13 20:10:06.768615 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:10:06.770072 systemd[1]: sshd@25-49.13.3.212:22-147.75.109.163:55676.service: Deactivated successfully. Feb 13 20:10:06.775897 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:10:06.781924 systemd-logind[1443]: Removed session 22. Feb 13 20:10:06.787275 systemd[1]: Started cri-containerd-b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28.scope - libcontainer container b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28. 
Feb 13 20:10:06.817604 containerd[1465]: time="2025-02-13T20:10:06.817550139Z" level=info msg="StartContainer for \"b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28\" returns successfully"
Feb 13 20:10:06.831470 systemd[1]: cri-containerd-b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28.scope: Deactivated successfully.
Feb 13 20:10:06.873947 containerd[1465]: time="2025-02-13T20:10:06.873525426Z" level=info msg="shim disconnected" id=b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28 namespace=k8s.io
Feb 13 20:10:06.873947 containerd[1465]: time="2025-02-13T20:10:06.873651954Z" level=warning msg="cleaning up after shim disconnected" id=b33ae34e0bc3ec51822d5978ca6f12c53a2c705d50a2deb40b9fd37fd3e54b28 namespace=k8s.io
Feb 13 20:10:06.873947 containerd[1465]: time="2025-02-13T20:10:06.873672835Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:06.933364 systemd[1]: Started sshd@26-49.13.3.212:22-147.75.109.163:55682.service - OpenSSH per-connection server daemon (147.75.109.163:55682).
Feb 13 20:10:07.062176 containerd[1465]: time="2025-02-13T20:10:07.061961728Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 20:10:07.090908 containerd[1465]: time="2025-02-13T20:10:07.090747043Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1\""
Feb 13 20:10:07.091964 containerd[1465]: time="2025-02-13T20:10:07.091813429Z" level=info msg="StartContainer for \"4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1\""
Feb 13 20:10:07.122238 systemd[1]: Started cri-containerd-4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1.scope - libcontainer container 4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1.
Feb 13 20:10:07.150879 containerd[1465]: time="2025-02-13T20:10:07.150814628Z" level=info msg="StartContainer for \"4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1\" returns successfully"
Feb 13 20:10:07.157564 systemd[1]: cri-containerd-4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1.scope: Deactivated successfully.
Feb 13 20:10:07.181966 containerd[1465]: time="2025-02-13T20:10:07.181726795Z" level=info msg="shim disconnected" id=4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1 namespace=k8s.io
Feb 13 20:10:07.181966 containerd[1465]: time="2025-02-13T20:10:07.181800840Z" level=warning msg="cleaning up after shim disconnected" id=4fbee3056f61981030587f91121204013dcae330de39f6a9f1bdf4f3c28b25a1 namespace=k8s.io
Feb 13 20:10:07.181966 containerd[1465]: time="2025-02-13T20:10:07.181809880Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:07.908516 sshd[4560]: Accepted publickey for core from 147.75.109.163 port 55682 ssh2: RSA SHA256:LuJDJNKkel1FhOLM0q8DPGyx1TBQkJbblXKf4jZf034
Feb 13 20:10:07.911281 sshd[4560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:10:07.917087 systemd-logind[1443]: New session 23 of user core.
Feb 13 20:10:07.925237 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 20:10:08.064317 containerd[1465]: time="2025-02-13T20:10:08.064270783Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 20:10:08.083745 containerd[1465]: time="2025-02-13T20:10:08.083538305Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e\""
Feb 13 20:10:08.085227 containerd[1465]: time="2025-02-13T20:10:08.085169967Z" level=info msg="StartContainer for \"a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e\""
Feb 13 20:10:08.117786 systemd[1]: run-containerd-runc-k8s.io-a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e-runc.42LAHk.mount: Deactivated successfully.
Feb 13 20:10:08.125240 systemd[1]: Started cri-containerd-a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e.scope - libcontainer container a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e.
Feb 13 20:10:08.159590 containerd[1465]: time="2025-02-13T20:10:08.158968252Z" level=info msg="StartContainer for \"a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e\" returns successfully"
Feb 13 20:10:08.163080 systemd[1]: cri-containerd-a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e.scope: Deactivated successfully.
Feb 13 20:10:08.191399 containerd[1465]: time="2025-02-13T20:10:08.191339792Z" level=info msg="shim disconnected" id=a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e namespace=k8s.io
Feb 13 20:10:08.192192 containerd[1465]: time="2025-02-13T20:10:08.191891626Z" level=warning msg="cleaning up after shim disconnected" id=a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e namespace=k8s.io
Feb 13 20:10:08.192192 containerd[1465]: time="2025-02-13T20:10:08.191916948Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:08.204661 containerd[1465]: time="2025-02-13T20:10:08.204607540Z" level=warning msg="cleanup warnings time=\"2025-02-13T20:10:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 20:10:08.256303 kubelet[2667]: E0213 20:10:08.256207 2667 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:10:08.651713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a120768e28e4f47ab96749f7183b2fcb03d2e58563af8e16c68964cecf99e37e-rootfs.mount: Deactivated successfully.
Feb 13 20:10:08.916285 systemd[1]: Started sshd@27-49.13.3.212:22-183.224.219.194:43622.service - OpenSSH per-connection server daemon (183.224.219.194:43622).
Feb 13 20:10:09.076045 containerd[1465]: time="2025-02-13T20:10:09.075717220Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 20:10:09.099578 containerd[1465]: time="2025-02-13T20:10:09.097816761Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1\""
Feb 13 20:10:09.102518 containerd[1465]: time="2025-02-13T20:10:09.099994016Z" level=info msg="StartContainer for \"cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1\""
Feb 13 20:10:09.138424 systemd[1]: Started cri-containerd-cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1.scope - libcontainer container cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1.
Feb 13 20:10:09.170858 systemd[1]: cri-containerd-cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1.scope: Deactivated successfully.
Feb 13 20:10:09.175339 containerd[1465]: time="2025-02-13T20:10:09.174811929Z" level=info msg="StartContainer for \"cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1\" returns successfully"
Feb 13 20:10:09.210230 containerd[1465]: time="2025-02-13T20:10:09.210148296Z" level=info msg="shim disconnected" id=cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1 namespace=k8s.io
Feb 13 20:10:09.210230 containerd[1465]: time="2025-02-13T20:10:09.210218140Z" level=warning msg="cleaning up after shim disconnected" id=cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1 namespace=k8s.io
Feb 13 20:10:09.210230 containerd[1465]: time="2025-02-13T20:10:09.210227621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:10:09.650747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb21c96418c209731d4dfd3d3c9d31841723e10779e018e6c8125e909f7219b1-rootfs.mount: Deactivated successfully.
Feb 13 20:10:10.081596 containerd[1465]: time="2025-02-13T20:10:10.081286943Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 20:10:10.099562 containerd[1465]: time="2025-02-13T20:10:10.099405355Z" level=info msg="CreateContainer within sandbox \"f89046d3b674da5b7691d33e41c0ac5d3bcbb09162ebcf3e84cc00ea21f285e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca\""
Feb 13 20:10:10.100357 containerd[1465]: time="2025-02-13T20:10:10.100327693Z" level=info msg="StartContainer for \"385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca\""
Feb 13 20:10:10.134299 systemd[1]: Started cri-containerd-385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca.scope - libcontainer container 385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca.
Feb 13 20:10:10.173423 containerd[1465]: time="2025-02-13T20:10:10.173368338Z" level=info msg="StartContainer for \"385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca\" returns successfully"
Feb 13 20:10:10.485133 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 20:10:11.110098 kubelet[2667]: I0213 20:10:11.110007 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6r2x6" podStartSLOduration=7.109981402 podStartE2EDuration="7.109981402s" podCreationTimestamp="2025-02-13 20:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:10:11.107951875 +0000 UTC m=+353.173066966" watchObservedRunningTime="2025-02-13 20:10:11.109981402 +0000 UTC m=+353.175096453"
Feb 13 20:10:13.585569 systemd-networkd[1380]: lxc_health: Link UP
Feb 13 20:10:13.598394 systemd-networkd[1380]: lxc_health: Gained carrier
Feb 13 20:10:14.819251 systemd-networkd[1380]: lxc_health: Gained IPv6LL
Feb 13 20:10:14.825471 systemd[1]: run-containerd-runc-k8s.io-385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca-runc.fEfG6y.mount: Deactivated successfully.
Feb 13 20:10:15.175652 systemd[1]: Started sshd@28-49.13.3.212:22-183.224.219.194:33880.service - OpenSSH per-connection server daemon (183.224.219.194:33880).
Feb 13 20:10:17.021453 systemd[1]: run-containerd-runc-k8s.io-385b93cf2248b6f5a6bb214bb7670a80250690ee43ec747c4fb9e48b07f3d3ca-runc.0ZeFma.mount: Deactivated successfully.
Feb 13 20:10:18.075552 containerd[1465]: time="2025-02-13T20:10:18.075506777Z" level=info msg="StopPodSandbox for \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\""
Feb 13 20:10:18.076278 containerd[1465]: time="2025-02-13T20:10:18.075613291Z" level=info msg="TearDown network for sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" successfully"
Feb 13 20:10:18.076278 containerd[1465]: time="2025-02-13T20:10:18.075626210Z" level=info msg="StopPodSandbox for \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" returns successfully"
Feb 13 20:10:18.076278 containerd[1465]: time="2025-02-13T20:10:18.076130861Z" level=info msg="RemovePodSandbox for \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\""
Feb 13 20:10:18.076278 containerd[1465]: time="2025-02-13T20:10:18.076166459Z" level=info msg="Forcibly stopping sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\""
Feb 13 20:10:18.076278 containerd[1465]: time="2025-02-13T20:10:18.076223776Z" level=info msg="TearDown network for sandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" successfully"
Feb 13 20:10:18.083089 containerd[1465]: time="2025-02-13T20:10:18.082952635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:10:18.083089 containerd[1465]: time="2025-02-13T20:10:18.083089107Z" level=info msg="RemovePodSandbox \"7f4ef21e8c247bce0807edf9857992721cca6295ab0d718d514862a3f19dbf7e\" returns successfully"
Feb 13 20:10:18.085319 containerd[1465]: time="2025-02-13T20:10:18.085275183Z" level=info msg="StopPodSandbox for \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\""
Feb 13 20:10:18.085544 containerd[1465]: time="2025-02-13T20:10:18.085373938Z" level=info msg="TearDown network for sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" successfully"
Feb 13 20:10:18.085544 containerd[1465]: time="2025-02-13T20:10:18.085385977Z" level=info msg="StopPodSandbox for \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" returns successfully"
Feb 13 20:10:18.086108 containerd[1465]: time="2025-02-13T20:10:18.086059059Z" level=info msg="RemovePodSandbox for \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\""
Feb 13 20:10:18.086108 containerd[1465]: time="2025-02-13T20:10:18.086117935Z" level=info msg="Forcibly stopping sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\""
Feb 13 20:10:18.086355 containerd[1465]: time="2025-02-13T20:10:18.086175932Z" level=info msg="TearDown network for sandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" successfully"
Feb 13 20:10:18.091564 containerd[1465]: time="2025-02-13T20:10:18.091507470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:10:18.091709 containerd[1465]: time="2025-02-13T20:10:18.091689580Z" level=info msg="RemovePodSandbox \"061821abf1ae7c1262cd3b64039d95ac1215acb7500a6d8ded29f46d2d00c3fa\" returns successfully"
Feb 13 20:10:19.401168 sshd[4560]: pam_unix(sshd:session): session closed for user core
Feb 13 20:10:19.406299 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit.
Feb 13 20:10:19.406643 systemd[1]: sshd@26-49.13.3.212:22-147.75.109.163:55682.service: Deactivated successfully.
Feb 13 20:10:19.412049 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 20:10:19.417934 systemd-logind[1443]: Removed session 23.
Feb 13 20:10:21.448708 systemd[1]: Started sshd@29-49.13.3.212:22-183.224.219.194:40802.service - OpenSSH per-connection server daemon (183.224.219.194:40802).
Feb 13 20:10:27.704326 systemd[1]: Started sshd@30-49.13.3.212:22-183.224.219.194:55162.service - OpenSSH per-connection server daemon (183.224.219.194:55162).
Feb 13 20:10:33.969510 systemd[1]: Started sshd@31-49.13.3.212:22-183.224.219.194:45864.service - OpenSSH per-connection server daemon (183.224.219.194:45864).
Feb 13 20:10:40.229637 systemd[1]: Started sshd@32-49.13.3.212:22-183.224.219.194:37230.service - OpenSSH per-connection server daemon (183.224.219.194:37230).