Jul 12 00:10:33.916409 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:10:33.916434 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:10:33.916444 kernel: KASLR enabled
Jul 12 00:10:33.916450 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jul 12 00:10:33.916464 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Jul 12 00:10:33.917595 kernel: random: crng init done
Jul 12 00:10:33.917627 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:10:33.917634 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jul 12 00:10:33.917641 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:10:33.917652 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917659 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917665 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917671 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917677 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917684 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917693 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917699 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917705 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:10:33.917712 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 12 00:10:33.917718 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jul 12 00:10:33.917724 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:10:33.917731 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:10:33.917737 kernel: NUMA: NODE_DATA [mem 0x13966e800-0x139673fff]
Jul 12 00:10:33.917743 kernel: Zone ranges:
Jul 12 00:10:33.917749 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 12 00:10:33.917757 kernel: DMA32 empty
Jul 12 00:10:33.917763 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jul 12 00:10:33.917769 kernel: Movable zone start for each node
Jul 12 00:10:33.917776 kernel: Early memory node ranges
Jul 12 00:10:33.917782 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Jul 12 00:10:33.917788 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jul 12 00:10:33.917794 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jul 12 00:10:33.917801 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jul 12 00:10:33.917807 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jul 12 00:10:33.917813 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jul 12 00:10:33.917819 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jul 12 00:10:33.917826 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jul 12 00:10:33.917833 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jul 12 00:10:33.917840 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:10:33.917846 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:10:33.917856 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:10:33.917862 kernel: psci: Trusted OS migration not required
Jul 12 00:10:33.917869 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:10:33.917877 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:10:33.917884 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:10:33.917891 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:10:33.917898 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 12 00:10:33.917905 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:10:33.917911 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:10:33.917918 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:10:33.917925 kernel: CPU features: detected: Spectre-v4
Jul 12 00:10:33.917932 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:10:33.917939 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:10:33.917947 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:10:33.917954 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:10:33.917961 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:10:33.917967 kernel: alternatives: applying boot alternatives
Jul 12 00:10:33.917975 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:10:33.917982 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:10:33.917989 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:10:33.917996 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:10:33.918003 kernel: Fallback order for Node 0: 0
Jul 12 00:10:33.918009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jul 12 00:10:33.918016 kernel: Policy zone: Normal
Jul 12 00:10:33.918024 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:10:33.918031 kernel: software IO TLB: area num 2.
Jul 12 00:10:33.918038 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jul 12 00:10:33.918045 kernel: Memory: 3882804K/4096000K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 213196K reserved, 0K cma-reserved)
Jul 12 00:10:33.918052 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 12 00:10:33.918059 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:10:33.918066 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:10:33.918073 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 12 00:10:33.918080 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:10:33.918087 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:10:33.918094 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:10:33.918102 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 12 00:10:33.918109 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:10:33.918116 kernel: GICv3: 256 SPIs implemented
Jul 12 00:10:33.918123 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:10:33.918129 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:10:33.918136 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:10:33.918143 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:10:33.918149 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:10:33.918156 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:10:33.918163 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:10:33.918170 kernel: GICv3: using LPI property table @0x00000001000e0000
Jul 12 00:10:33.918177 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jul 12 00:10:33.918185 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:10:33.918192 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:33.918199 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:10:33.918206 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:10:33.918213 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:10:33.918219 kernel: Console: colour dummy device 80x25
Jul 12 00:10:33.918226 kernel: ACPI: Core revision 20230628
Jul 12 00:10:33.918234 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:10:33.918241 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:10:33.918248 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:10:33.918256 kernel: landlock: Up and running.
Jul 12 00:10:33.918263 kernel: SELinux: Initializing.
Jul 12 00:10:33.918270 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:10:33.918277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:10:33.918284 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:10:33.918292 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jul 12 00:10:33.918299 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:10:33.918306 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:10:33.918313 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:10:33.918321 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:10:33.918328 kernel: Remapping and enabling EFI services.
Jul 12 00:10:33.918335 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:10:33.918342 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:10:33.918349 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:10:33.918357 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jul 12 00:10:33.918364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:10:33.918370 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:10:33.918377 kernel: smp: Brought up 1 node, 2 CPUs
Jul 12 00:10:33.918384 kernel: SMP: Total of 2 processors activated.
Jul 12 00:10:33.918393 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:10:33.918400 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:10:33.918412 kernel: CPU features: detected: Common not Private translations
Jul 12 00:10:33.918421 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:10:33.918429 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:10:33.918436 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:10:33.918444 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:10:33.918451 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:10:33.918458 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:10:33.918468 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:10:33.920548 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:10:33.920569 kernel: alternatives: applying system-wide alternatives
Jul 12 00:10:33.920577 kernel: devtmpfs: initialized
Jul 12 00:10:33.920585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:10:33.920593 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 12 00:10:33.920601 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:10:33.920615 kernel: SMBIOS 3.0.0 present.
Jul 12 00:10:33.920622 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jul 12 00:10:33.920630 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:10:33.920637 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:10:33.920645 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:10:33.920652 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:10:33.920660 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:10:33.920667 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Jul 12 00:10:33.920674 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:10:33.920684 kernel: cpuidle: using governor menu
Jul 12 00:10:33.920691 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:10:33.920698 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:10:33.920706 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:10:33.920714 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:10:33.920721 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:10:33.920728 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:10:33.920736 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:10:33.920743 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:10:33.920752 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:10:33.920760 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:10:33.920767 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:10:33.920774 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:10:33.920782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:10:33.920789 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:10:33.920797 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:10:33.920804 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:10:33.920811 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:10:33.920820 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:10:33.920827 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:10:33.920835 kernel: ACPI: Interpreter enabled
Jul 12 00:10:33.920842 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:10:33.920849 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:10:33.920857 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:10:33.920864 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:10:33.920871 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:10:33.921033 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:10:33.921109 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:10:33.921173 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:10:33.921238 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:10:33.921300 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:10:33.921310 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:10:33.921317 kernel: PCI host bridge to bus 0000:00
Jul 12 00:10:33.921389 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:10:33.921451 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:10:33.921645 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:10:33.921710 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:10:33.921794 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:10:33.921873 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jul 12 00:10:33.921940 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jul 12 00:10:33.922011 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:10:33.922085 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922150 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jul 12 00:10:33.922230 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922298 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jul 12 00:10:33.922369 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922435 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jul 12 00:10:33.922577 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922653 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jul 12 00:10:33.922728 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922794 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jul 12 00:10:33.922865 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.922940 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jul 12 00:10:33.923013 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.923079 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jul 12 00:10:33.923153 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.923229 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jul 12 00:10:33.923303 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jul 12 00:10:33.923371 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jul 12 00:10:33.923453 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jul 12 00:10:33.925662 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Jul 12 00:10:33.925778 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:10:33.925851 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jul 12 00:10:33.925919 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:10:33.925987 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:10:33.926072 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jul 12 00:10:33.926141 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jul 12 00:10:33.926219 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jul 12 00:10:33.926287 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jul 12 00:10:33.926354 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jul 12 00:10:33.926432 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jul 12 00:10:33.928612 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jul 12 00:10:33.928728 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jul 12 00:10:33.928798 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jul 12 00:10:33.928875 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jul 12 00:10:33.928942 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jul 12 00:10:33.929008 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:10:33.929085 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jul 12 00:10:33.929156 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jul 12 00:10:33.929233 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jul 12 00:10:33.929299 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jul 12 00:10:33.929369 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jul 12 00:10:33.929435 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:10:33.929604 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jul 12 00:10:33.929699 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jul 12 00:10:33.929775 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jul 12 00:10:33.929846 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jul 12 00:10:33.929928 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jul 12 00:10:33.929999 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:10:33.930070 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jul 12 00:10:33.930151 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jul 12 00:10:33.930223 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jul 12 00:10:33.930300 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jul 12 00:10:33.930379 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jul 12 00:10:33.930455 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jul 12 00:10:33.930557 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jul 12 00:10:33.930640 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jul 12 00:10:33.930717 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:10:33.930787 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jul 12 00:10:33.930890 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jul 12 00:10:33.930976 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:10:33.931063 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jul 12 00:10:33.931149 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jul 12 00:10:33.931225 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:10:33.931304 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jul 12 00:10:33.931387 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jul 12 00:10:33.931464 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:10:33.932769 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jul 12 00:10:33.932852 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jul 12 00:10:33.932926 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:10:33.933005 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jul 12 00:10:33.933082 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:10:33.933160 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jul 12 00:10:33.933236 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:10:33.933316 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jul 12 00:10:33.933382 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:10:33.933472 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jul 12 00:10:33.933591 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:10:33.933665 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jul 12 00:10:33.933731 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:10:33.933805 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jul 12 00:10:33.933871 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:10:33.933940 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jul 12 00:10:33.934007 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:10:33.934072 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jul 12 00:10:33.934137 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:10:33.934209 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jul 12 00:10:33.934278 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jul 12 00:10:33.934344 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jul 12 00:10:33.934411 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jul 12 00:10:33.936570 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jul 12 00:10:33.936733 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jul 12 00:10:33.936809 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jul 12 00:10:33.936875 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jul 12 00:10:33.936945 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jul 12 00:10:33.937036 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jul 12 00:10:33.937129 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jul 12 00:10:33.937202 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jul 12 00:10:33.937272 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jul 12 00:10:33.937347 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jul 12 00:10:33.937420 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jul 12 00:10:33.937544 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jul 12 00:10:33.937619 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jul 12 00:10:33.937692 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jul 12 00:10:33.937762 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jul 12 00:10:33.937828 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jul 12 00:10:33.937899 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jul 12 00:10:33.937976 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jul 12 00:10:33.938047 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:10:33.938117 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jul 12 00:10:33.938189 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jul 12 00:10:33.938256 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jul 12 00:10:33.940578 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jul 12 00:10:33.940747 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:10:33.940832 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jul 12 00:10:33.940905 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jul 12 00:10:33.940982 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jul 12 00:10:33.941047 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jul 12 00:10:33.941113 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:10:33.941188 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jul 12 00:10:33.941257 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jul 12 00:10:33.941326 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jul 12 00:10:33.941391 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jul 12 00:10:33.941458 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jul 12 00:10:33.941577 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:10:33.941654 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jul 12 00:10:33.941721 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jul 12 00:10:33.941785 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jul 12 00:10:33.941847 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jul 12 00:10:33.941913 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:10:33.942251 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jul 12 00:10:33.942336 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jul 12 00:10:33.942400 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jul 12 00:10:33.942463 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jul 12 00:10:33.943719 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:10:33.943817 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jul 12 00:10:33.943888 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jul 12 00:10:33.943961 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jul 12 00:10:33.944026 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jul 12 00:10:33.944101 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jul 12 00:10:33.944166 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:10:33.944240 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jul 12 00:10:33.944308 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jul 12 00:10:33.944375 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jul 12 00:10:33.944450 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jul 12 00:10:33.944605 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jul 12 00:10:33.944678 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jul 12 00:10:33.944749 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:10:33.944820 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jul 12 00:10:33.944887 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jul 12 00:10:33.944956 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jul 12 00:10:33.945025 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:10:33.945097 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jul 12 00:10:33.945168 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jul 12 00:10:33.945242 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jul 12 00:10:33.945315 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:10:33.945385 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:10:33.945446 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:10:33.945592 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:10:33.945684 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jul 12 00:10:33.945746 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jul 12 00:10:33.945804 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jul 12 00:10:33.945877 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jul 12 00:10:33.945940 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jul 12 00:10:33.946001 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jul 12 00:10:33.946070 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jul 12 00:10:33.946132 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jul 12 00:10:33.946192 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jul 12 00:10:33.946264 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jul 12 00:10:33.946326 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jul 12 00:10:33.946393 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jul 12 00:10:33.946532 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jul 12 00:10:33.946642 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jul 12 00:10:33.946705 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jul 12 00:10:33.946775 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jul 12 00:10:33.946840 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jul 12 00:10:33.946900 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jul 12 00:10:33.946970 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jul 12 00:10:33.947034 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jul 12 00:10:33.947101 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jul 12 00:10:33.947170 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jul 12 00:10:33.947231 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jul 12 00:10:33.947290 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jul 12 00:10:33.947358 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jul 12 00:10:33.947419 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jul 12 00:10:33.947565 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jul 12 00:10:33.947584 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:10:33.947592 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:10:33.947600 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:10:33.947610 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:10:33.947618 kernel: iommu: Default domain type: Translated
Jul 12 00:10:33.947626 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:10:33.947634 kernel: efivars: Registered efivars operations
Jul 12 00:10:33.947641 kernel: vgaarb: loaded
Jul 12 00:10:33.947649 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:10:33.947658 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:10:33.947667 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:10:33.947674 kernel: pnp: PnP ACPI init
Jul 12 00:10:33.947765 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:10:33.947778 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:10:33.947786 kernel: NET: Registered PF_INET protocol family
Jul 12 00:10:33.947794 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:10:33.947802 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:10:33.947812 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:10:33.947820 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:10:33.947828 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:10:33.947836 kernel: TCP: Hash tables configured (established 32768
bind 32768) Jul 12 00:10:33.947844 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:10:33.947851 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:10:33.947860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:10:33.947936 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jul 12 00:10:33.947948 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:10:33.947958 kernel: kvm [1]: HYP mode not available Jul 12 00:10:33.947965 kernel: Initialise system trusted keyrings Jul 12 00:10:33.947973 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:10:33.947981 kernel: Key type asymmetric registered Jul 12 00:10:33.947989 kernel: Asymmetric key parser 'x509' registered Jul 12 00:10:33.947997 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:10:33.948005 kernel: io scheduler mq-deadline registered Jul 12 00:10:33.948012 kernel: io scheduler kyber registered Jul 12 00:10:33.948020 kernel: io scheduler bfq registered Jul 12 00:10:33.948030 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jul 12 00:10:33.948098 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jul 12 00:10:33.948163 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jul 12 00:10:33.948226 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.948292 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jul 12 00:10:33.948358 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jul 12 00:10:33.948422 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.948520 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jul 12 00:10:33.948590 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jul 12 00:10:33.948654 kernel: pcieport 
0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.948722 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jul 12 00:10:33.948787 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jul 12 00:10:33.948851 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.948923 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jul 12 00:10:33.948989 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jul 12 00:10:33.949053 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.949121 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jul 12 00:10:33.949188 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jul 12 00:10:33.949256 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.949328 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jul 12 00:10:33.949397 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jul 12 00:10:33.949585 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.949671 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jul 12 00:10:33.949753 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jul 12 00:10:33.949828 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.949839 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jul 12 00:10:33.949908 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jul 12 00:10:33.949975 kernel: pcieport 0000:00:03.0: AER: enabled 
with IRQ 58 Jul 12 00:10:33.950041 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 12 00:10:33.950051 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:10:33.950059 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:10:33.950067 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:10:33.950142 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jul 12 00:10:33.950217 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jul 12 00:10:33.950228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:10:33.950239 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 12 00:10:33.950326 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jul 12 00:10:33.950339 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jul 12 00:10:33.950347 kernel: thunder_xcv, ver 1.0 Jul 12 00:10:33.950355 kernel: thunder_bgx, ver 1.0 Jul 12 00:10:33.950365 kernel: nicpf, ver 1.0 Jul 12 00:10:33.950373 kernel: nicvf, ver 1.0 Jul 12 00:10:33.950456 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:10:33.950636 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:10:33 UTC (1752279033) Jul 12 00:10:33.950650 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:10:33.950659 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:10:33.950666 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:10:33.950675 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:10:33.950686 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:10:33.950694 kernel: Segment Routing with IPv6 Jul 12 00:10:33.950702 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:10:33.950710 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:10:33.950717 kernel: Key type dns_resolver 
registered Jul 12 00:10:33.950725 kernel: registered taskstats version 1 Jul 12 00:10:33.950733 kernel: Loading compiled-in X.509 certificates Jul 12 00:10:33.950741 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:10:33.950749 kernel: Key type .fscrypt registered Jul 12 00:10:33.950756 kernel: Key type fscrypt-provisioning registered Jul 12 00:10:33.950766 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 00:10:33.950774 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:10:33.950782 kernel: ima: No architecture policies found Jul 12 00:10:33.950790 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:10:33.950797 kernel: clk: Disabling unused clocks Jul 12 00:10:33.950805 kernel: Freeing unused kernel memory: 39424K Jul 12 00:10:33.950813 kernel: Run /init as init process Jul 12 00:10:33.950820 kernel: with arguments: Jul 12 00:10:33.950829 kernel: /init Jul 12 00:10:33.950837 kernel: with environment: Jul 12 00:10:33.950844 kernel: HOME=/ Jul 12 00:10:33.950852 kernel: TERM=linux Jul 12 00:10:33.950859 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:10:33.950869 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:10:33.950879 systemd[1]: Detected virtualization kvm. Jul 12 00:10:33.950887 systemd[1]: Detected architecture arm64. Jul 12 00:10:33.950897 systemd[1]: Running in initrd. Jul 12 00:10:33.950904 systemd[1]: No hostname configured, using default hostname. Jul 12 00:10:33.950912 systemd[1]: Hostname set to . Jul 12 00:10:33.950921 systemd[1]: Initializing machine ID from VM UUID. 
Jul 12 00:10:33.950929 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:10:33.950937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:10:33.950946 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:10:33.950955 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 00:10:33.950965 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:10:33.950973 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:10:33.950982 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:10:33.950991 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:10:33.951000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:10:33.951008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:10:33.951017 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:10:33.951031 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:10:33.951042 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:10:33.951051 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:10:33.951061 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:10:33.951071 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:10:33.951083 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:10:33.951093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jul 12 00:10:33.951103 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:10:33.951115 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:10:33.951124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:10:33.951132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:10:33.951141 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:10:33.951151 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:10:33.951160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:10:33.951170 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:10:33.951180 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:10:33.951189 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:10:33.951201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:10:33.951210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:10:33.951218 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:10:33.951226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:10:33.951259 systemd-journald[236]: Collecting audit messages is disabled. Jul 12 00:10:33.951282 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:10:33.951291 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:10:33.951300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:10:33.951310 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 00:10:33.951318 kernel: Bridge firewalling registered Jul 12 00:10:33.951326 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:10:33.951334 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:10:33.951343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:10:33.951353 systemd-journald[236]: Journal started Jul 12 00:10:33.951372 systemd-journald[236]: Runtime Journal (/run/log/journal/d508e184b3c54e9a901df22a6f81d6d6) is 8.0M, max 76.6M, 68.6M free. Jul 12 00:10:33.914355 systemd-modules-load[237]: Inserted module 'overlay' Jul 12 00:10:33.938428 systemd-modules-load[237]: Inserted module 'br_netfilter' Jul 12 00:10:33.954901 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:10:33.961164 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:10:33.964059 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:10:33.977728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:10:33.987047 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:10:33.998820 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:10:34.001508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:10:34.002322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:10:34.003267 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:10:34.015740 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 12 00:10:34.019170 dracut-cmdline[267]: dracut-dracut-053 Jul 12 00:10:34.030119 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:10:34.051111 systemd-resolved[273]: Positive Trust Anchors: Jul 12 00:10:34.051126 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:10:34.051162 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:10:34.058897 systemd-resolved[273]: Defaulting to hostname 'linux'. Jul 12 00:10:34.060037 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:10:34.060754 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:10:34.137599 kernel: SCSI subsystem initialized Jul 12 00:10:34.142525 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:10:34.150553 kernel: iscsi: registered transport (tcp) Jul 12 00:10:34.165541 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:10:34.165603 kernel: QLogic iSCSI HBA Driver Jul 12 00:10:34.225518 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jul 12 00:10:34.231718 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:10:34.252751 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:10:34.252837 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:10:34.253503 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:10:34.305552 kernel: raid6: neonx8 gen() 15114 MB/s Jul 12 00:10:34.322557 kernel: raid6: neonx4 gen() 12870 MB/s Jul 12 00:10:34.339556 kernel: raid6: neonx2 gen() 13182 MB/s Jul 12 00:10:34.356548 kernel: raid6: neonx1 gen() 10450 MB/s Jul 12 00:10:34.373602 kernel: raid6: int64x8 gen() 6919 MB/s Jul 12 00:10:34.390552 kernel: raid6: int64x4 gen() 7313 MB/s Jul 12 00:10:34.407542 kernel: raid6: int64x2 gen() 6099 MB/s Jul 12 00:10:34.424545 kernel: raid6: int64x1 gen() 5033 MB/s Jul 12 00:10:34.424643 kernel: raid6: using algorithm neonx8 gen() 15114 MB/s Jul 12 00:10:34.441575 kernel: raid6: .... xor() 11885 MB/s, rmw enabled Jul 12 00:10:34.441670 kernel: raid6: using neon recovery algorithm Jul 12 00:10:34.446676 kernel: xor: measuring software checksum speed Jul 12 00:10:34.446762 kernel: 8regs : 19783 MB/sec Jul 12 00:10:34.447888 kernel: 32regs : 19664 MB/sec Jul 12 00:10:34.447939 kernel: arm64_neon : 27052 MB/sec Jul 12 00:10:34.447977 kernel: xor: using function: arm64_neon (27052 MB/sec) Jul 12 00:10:34.499558 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:10:34.515643 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:10:34.524793 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:10:34.554393 systemd-udevd[455]: Using default interface naming scheme 'v255'. Jul 12 00:10:34.557938 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jul 12 00:10:34.564722 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:10:34.583821 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation Jul 12 00:10:34.627560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 00:10:34.634753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:10:34.686360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:10:34.698167 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 00:10:34.719794 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 00:10:34.720963 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 00:10:34.722199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:10:34.724903 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:10:34.732877 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 00:10:34.758048 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 00:10:34.802544 kernel: scsi host0: Virtio SCSI HBA Jul 12 00:10:34.813684 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 12 00:10:34.813776 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jul 12 00:10:34.822056 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:10:34.822177 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:10:34.824093 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:10:34.826531 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:10:34.826696 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 12 00:10:34.832648 kernel: ACPI: bus type USB registered Jul 12 00:10:34.832710 kernel: usbcore: registered new interface driver usbfs Jul 12 00:10:34.829255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:10:34.835045 kernel: usbcore: registered new interface driver hub Jul 12 00:10:34.836534 kernel: usbcore: registered new device driver usb Jul 12 00:10:34.836856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:10:34.858723 kernel: sr 0:0:0:0: Power-on or device reset occurred Jul 12 00:10:34.858863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:10:34.868944 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jul 12 00:10:34.869146 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 12 00:10:34.870509 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jul 12 00:10:34.872731 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:10:34.877541 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 12 00:10:34.877939 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jul 12 00:10:34.878027 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 12 00:10:34.882579 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jul 12 00:10:34.883300 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jul 12 00:10:34.883409 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jul 12 00:10:34.883592 kernel: hub 1-0:1.0: USB hub found Jul 12 00:10:34.885209 kernel: hub 1-0:1.0: 4 ports detected Jul 12 00:10:34.885713 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jul 12 00:10:34.885828 kernel: hub 2-0:1.0: USB hub found Jul 12 00:10:34.887195 kernel: hub 2-0:1.0: 4 ports detected Jul 12 00:10:34.902689 kernel: sd 0:0:0:1: Power-on or device reset occurred Jul 12 00:10:34.902905 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jul 12 00:10:34.903711 kernel: sd 0:0:0:1: [sda] Write Protect is off Jul 12 00:10:34.903864 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jul 12 00:10:34.903948 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 12 00:10:34.904636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:10:34.911587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:10:34.911639 kernel: GPT:17805311 != 80003071 Jul 12 00:10:34.911660 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:10:34.911673 kernel: GPT:17805311 != 80003071 Jul 12 00:10:34.913557 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:10:34.913602 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:10:34.913624 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jul 12 00:10:34.958565 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (508) Jul 12 00:10:34.959506 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (513) Jul 12 00:10:34.960060 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jul 12 00:10:34.965037 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jul 12 00:10:34.980774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 12 00:10:34.985174 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Jul 12 00:10:34.986211 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jul 12 00:10:34.995683 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 00:10:35.004822 disk-uuid[572]: Primary Header is updated. Jul 12 00:10:35.004822 disk-uuid[572]: Secondary Entries is updated. Jul 12 00:10:35.004822 disk-uuid[572]: Secondary Header is updated. Jul 12 00:10:35.013519 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:10:35.018625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:10:35.025557 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:10:35.124582 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 12 00:10:35.260104 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jul 12 00:10:35.260193 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jul 12 00:10:35.260572 kernel: usbcore: registered new interface driver usbhid Jul 12 00:10:35.260601 kernel: usbhid: USB HID core driver Jul 12 00:10:35.367518 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jul 12 00:10:35.497529 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jul 12 00:10:35.551521 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jul 12 00:10:36.028631 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 12 00:10:36.030410 disk-uuid[573]: The operation has completed successfully. Jul 12 00:10:36.082061 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:10:36.082193 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 12 00:10:36.104778 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 00:10:36.109070 sh[590]: Success Jul 12 00:10:36.120506 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:10:36.189752 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 00:10:36.192544 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 00:10:36.211751 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 00:10:36.230562 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c Jul 12 00:10:36.230635 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:10:36.230651 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 12 00:10:36.231718 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 12 00:10:36.231763 kernel: BTRFS info (device dm-0): using free space tree Jul 12 00:10:36.239537 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 12 00:10:36.242352 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 00:10:36.243247 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 00:10:36.248784 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 00:10:36.250103 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 12 00:10:36.269162 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:10:36.269250 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:10:36.269268 kernel: BTRFS info (device sda6): using free space tree Jul 12 00:10:36.274539 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 12 00:10:36.274616 kernel: BTRFS info (device sda6): auto enabling async discard Jul 12 00:10:36.285951 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:10:36.287552 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95 Jul 12 00:10:36.300281 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 00:10:36.308772 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 00:10:36.403944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 00:10:36.412698 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:10:36.414068 ignition[697]: Ignition 2.19.0 Jul 12 00:10:36.414077 ignition[697]: Stage: fetch-offline Jul 12 00:10:36.414140 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:10:36.414149 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jul 12 00:10:36.414306 ignition[697]: parsed url from cmdline: "" Jul 12 00:10:36.414309 ignition[697]: no config URL provided Jul 12 00:10:36.414314 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:10:36.414321 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:10:36.414326 ignition[697]: failed to fetch config: resource requires networking Jul 12 00:10:36.415253 ignition[697]: Ignition finished successfully Jul 12 00:10:36.419823 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 12 00:10:36.438741 systemd-networkd[778]: lo: Link UP Jul 12 00:10:36.438754 systemd-networkd[778]: lo: Gained carrier Jul 12 00:10:36.440424 systemd-networkd[778]: Enumeration completed Jul 12 00:10:36.441043 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:10:36.441816 systemd[1]: Reached target network.target - Network. Jul 12 00:10:36.441850 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:36.441857 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:10:36.445672 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:36.445679 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:10:36.446858 systemd-networkd[778]: eth0: Link UP Jul 12 00:10:36.446862 systemd-networkd[778]: eth0: Gained carrier Jul 12 00:10:36.446872 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:36.448781 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 12 00:10:36.452977 systemd-networkd[778]: eth1: Link UP Jul 12 00:10:36.452980 systemd-networkd[778]: eth1: Gained carrier Jul 12 00:10:36.452991 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:10:36.465790 ignition[781]: Ignition 2.19.0
Jul 12 00:10:36.465807 ignition[781]: Stage: fetch
Jul 12 00:10:36.466036 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:36.466047 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:36.466152 ignition[781]: parsed url from cmdline: ""
Jul 12 00:10:36.466155 ignition[781]: no config URL provided
Jul 12 00:10:36.466160 ignition[781]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:10:36.466171 ignition[781]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:10:36.466191 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jul 12 00:10:36.467147 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jul 12 00:10:36.487587 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:10:36.519619 systemd-networkd[778]: eth0: DHCPv4 address 91.99.93.35/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jul 12 00:10:36.667438 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jul 12 00:10:36.673789 ignition[781]: GET result: OK
Jul 12 00:10:36.673894 ignition[781]: parsing config with SHA512: c190b43fe3910ccd0a9ec52eeb559804102b14a1511a46a1324918a6b406d280f63912240636deeff44d05abc05bfefde5b598cbfd0ae7e6e2969a2d2c6837c8
Jul 12 00:10:36.681236 unknown[781]: fetched base config from "system"
Jul 12 00:10:36.681257 unknown[781]: fetched base config from "system"
Jul 12 00:10:36.682190 ignition[781]: fetch: fetch complete
Jul 12 00:10:36.681275 unknown[781]: fetched user config from "hetzner"
Jul 12 00:10:36.682201 ignition[781]: fetch: fetch passed
Jul 12 00:10:36.682290 ignition[781]: Ignition finished successfully
Jul 12 00:10:36.686705 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 12 00:10:36.693806 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:10:36.709354 ignition[788]: Ignition 2.19.0
Jul 12 00:10:36.709366 ignition[788]: Stage: kargs
Jul 12 00:10:36.709621 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:36.709633 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:36.710753 ignition[788]: kargs: kargs passed
Jul 12 00:10:36.710819 ignition[788]: Ignition finished successfully
Jul 12 00:10:36.714543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:10:36.723131 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:10:36.742940 ignition[794]: Ignition 2.19.0
Jul 12 00:10:36.742952 ignition[794]: Stage: disks
Jul 12 00:10:36.743135 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:36.743146 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:36.745852 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:10:36.744179 ignition[794]: disks: disks passed
Jul 12 00:10:36.744237 ignition[794]: Ignition finished successfully
Jul 12 00:10:36.747413 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:10:36.748848 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:10:36.750429 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:10:36.752366 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:10:36.754071 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:10:36.760699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:10:36.780226 systemd-fsck[802]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jul 12 00:10:36.784134 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:10:36.788747 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:10:36.853517 kernel: EXT4-fs (sda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:10:36.854195 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:10:36.857060 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:10:36.865660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:10:36.868649 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:10:36.873717 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 12 00:10:36.874605 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:10:36.874641 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:10:36.887547 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (810)
Jul 12 00:10:36.889917 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:10:36.889981 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:36.891175 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:10:36.893322 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:10:36.898632 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:10:36.898744 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:10:36.902693 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:10:36.905846 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:10:36.951551 coreos-metadata[812]: Jul 12 00:10:36.951 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jul 12 00:10:36.954202 coreos-metadata[812]: Jul 12 00:10:36.953 INFO Fetch successful
Jul 12 00:10:36.957755 coreos-metadata[812]: Jul 12 00:10:36.956 INFO wrote hostname ci-4081-3-4-n-51c90d58be to /sysroot/etc/hostname
Jul 12 00:10:36.961435 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:10:36.965342 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:10:36.969191 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:10:36.976031 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:10:36.981409 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:10:37.100735 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:10:37.108703 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:10:37.114805 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:10:37.123515 kernel: BTRFS info (device sda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:10:37.155902 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:10:37.160932 ignition[928]: INFO : Ignition 2.19.0
Jul 12 00:10:37.160932 ignition[928]: INFO : Stage: mount
Jul 12 00:10:37.163679 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:37.163679 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:37.163679 ignition[928]: INFO : mount: mount passed
Jul 12 00:10:37.163679 ignition[928]: INFO : Ignition finished successfully
Jul 12 00:10:37.164230 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:10:37.170739 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:10:37.231654 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:10:37.239921 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:10:37.253761 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Jul 12 00:10:37.253832 kernel: BTRFS info (device sda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:10:37.254865 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:10:37.254915 kernel: BTRFS info (device sda6): using free space tree
Jul 12 00:10:37.258676 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jul 12 00:10:37.258735 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 12 00:10:37.262756 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:10:37.284669 ignition[957]: INFO : Ignition 2.19.0
Jul 12 00:10:37.285625 ignition[957]: INFO : Stage: files
Jul 12 00:10:37.286145 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:37.286145 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:37.288351 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:10:37.289808 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:10:37.289808 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:10:37.293355 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:10:37.294120 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:10:37.295129 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:10:37.294167 unknown[957]: wrote ssh authorized keys file for user: core
Jul 12 00:10:37.296725 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:10:37.296725 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:10:37.396897 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:10:37.502765 systemd-networkd[778]: eth1: Gained IPv6LL
Jul 12 00:10:37.558141 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:10:37.558141 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:10:37.560939 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:10:37.631242 systemd-networkd[778]: eth0: Gained IPv6LL
Jul 12 00:10:38.189007 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:10:38.266565 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:10:38.266565 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:38.270046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:10:38.613647 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:10:42.837898 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:10:42.837898 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:10:42.842491 ignition[957]: INFO : files: files passed
Jul 12 00:10:42.842491 ignition[957]: INFO : Ignition finished successfully
Jul 12 00:10:42.842095 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:10:42.849751 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:10:42.855733 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:10:42.863693 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:10:42.865239 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:10:42.878078 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:42.878078 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:42.881097 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:10:42.883716 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:10:42.885095 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:10:42.889715 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:10:42.938672 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:10:42.938942 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:10:42.940880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:10:42.942231 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:10:42.943403 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:10:42.950803 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:10:42.967242 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:10:42.973748 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:10:42.990575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:10:42.992037 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:10:42.993059 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:10:42.994285 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:10:42.994535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:10:42.995993 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:10:42.997258 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:10:42.998181 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:10:42.999210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:10:43.000288 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:10:43.001362 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:10:43.002447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:10:43.003582 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:10:43.004723 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:10:43.005700 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:10:43.006542 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:10:43.006733 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:10:43.007971 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:10:43.009130 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:10:43.010203 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:10:43.011298 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:10:43.012139 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:10:43.012317 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:10:43.013807 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:10:43.014000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:10:43.015098 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:10:43.015265 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:10:43.016175 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 12 00:10:43.016328 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 12 00:10:43.028546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:10:43.031818 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:10:43.032400 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:10:43.032742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:10:43.035938 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:10:43.036120 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:10:43.042086 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:10:43.042594 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:10:43.050023 ignition[1009]: INFO : Ignition 2.19.0
Jul 12 00:10:43.052357 ignition[1009]: INFO : Stage: umount
Jul 12 00:10:43.052357 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:10:43.052357 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jul 12 00:10:43.052357 ignition[1009]: INFO : umount: umount passed
Jul 12 00:10:43.052357 ignition[1009]: INFO : Ignition finished successfully
Jul 12 00:10:43.055570 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:10:43.056610 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:10:43.058313 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:10:43.058422 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:10:43.061691 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:10:43.061763 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:10:43.062331 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 12 00:10:43.062370 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 12 00:10:43.064123 systemd[1]: Stopped target network.target - Network.
Jul 12 00:10:43.064952 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:10:43.065015 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:10:43.066653 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:10:43.067132 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:10:43.071570 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:10:43.072534 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:10:43.073856 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:10:43.074931 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:10:43.074983 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:10:43.076017 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:10:43.076062 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:10:43.077035 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:10:43.077086 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:10:43.077971 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:10:43.078010 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:10:43.079109 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:10:43.080041 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:10:43.082136 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:10:43.082719 systemd-networkd[778]: eth1: DHCPv6 lease lost
Jul 12 00:10:43.082768 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:10:43.082862 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:10:43.083988 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:10:43.084090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:10:43.085651 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jul 12 00:10:43.091960 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:10:43.092108 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:10:43.094297 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:10:43.094595 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:10:43.096030 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:10:43.096064 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:10:43.102741 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:10:43.103248 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:10:43.103315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:10:43.104963 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:10:43.105014 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:10:43.106383 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:10:43.106445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:10:43.107088 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:10:43.107126 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:10:43.107993 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:10:43.116968 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:10:43.117139 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:10:43.120334 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:10:43.120413 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:10:43.122131 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:10:43.122171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:10:43.123859 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:10:43.123920 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:10:43.125539 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:10:43.125588 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:10:43.127323 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:10:43.127371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:10:43.133773 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:10:43.134374 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:10:43.134457 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:10:43.137277 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 12 00:10:43.137336 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:10:43.138811 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:10:43.138862 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:10:43.139541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:10:43.139581 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:10:43.140670 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:10:43.140784 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:10:43.148087 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:10:43.148217 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:10:43.149828 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:10:43.155704 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:10:43.163853 systemd[1]: Switching root.
Jul 12 00:10:43.200579 systemd-journald[236]: Journal stopped
Jul 12 00:10:44.183707 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:10:44.183786 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:10:44.183799 kernel: SELinux: policy capability open_perms=1
Jul 12 00:10:44.183808 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:10:44.183818 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:10:44.183827 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:10:44.183841 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:10:44.183850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:10:44.183859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:10:44.183869 kernel: audit: type=1403 audit(1752279043.396:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:10:44.183879 systemd[1]: Successfully loaded SELinux policy in 38.810ms.
Jul 12 00:10:44.183902 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.541ms.
Jul 12 00:10:44.183914 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:10:44.183924 systemd[1]: Detected virtualization kvm.
Jul 12 00:10:44.183936 systemd[1]: Detected architecture arm64.
Jul 12 00:10:44.183946 systemd[1]: Detected first boot.
Jul 12 00:10:44.183956 systemd[1]: Hostname set to .
Jul 12 00:10:44.183967 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:10:44.183977 zram_generator::config[1052]: No configuration found.
Jul 12 00:10:44.183989 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:10:44.184004 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:10:44.184015 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:10:44.184028 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:10:44.184038 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:10:44.184049 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:10:44.184059 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:10:44.184073 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:10:44.184083 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:10:44.184094 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:10:44.184104 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:10:44.184115 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:10:44.184127 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:10:44.184137 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:10:44.184148 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:10:44.184158 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:10:44.184168 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:10:44.184179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:10:44.184189 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:10:44.184200 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:10:44.184211 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:10:44.184223 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:10:44.184233 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:10:44.184243 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:10:44.184253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:10:44.184268 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:10:44.184278 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:10:44.184290 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:10:44.184301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:10:44.184311 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:10:44.184322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:10:44.184333 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:10:44.184344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:10:44.184354 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:10:44.184364 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:10:44.184375 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:10:44.184387 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:10:44.184397 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:10:44.184408 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:10:44.184418 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jul 12 00:10:44.184473 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:10:44.184502 systemd[1]: Reached target machines.target - Containers. Jul 12 00:10:44.184514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:10:44.184525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:10:44.184540 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:10:44.184551 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:10:44.184562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:10:44.184572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:10:44.184583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:10:44.184593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:10:44.184606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:10:44.184618 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:10:44.184629 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:10:44.184639 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:10:44.184649 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:10:44.184659 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:10:44.184672 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:10:44.184685 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 12 00:10:44.184698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:10:44.184713 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:10:44.184725 kernel: loop: module loaded Jul 12 00:10:44.184737 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:10:44.184750 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:10:44.184763 systemd[1]: Stopped verity-setup.service. Jul 12 00:10:44.184774 kernel: fuse: init (API version 7.39) Jul 12 00:10:44.184787 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:10:44.184799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:10:44.184814 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 00:10:44.184828 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:10:44.184840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:10:44.184852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:10:44.184865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:10:44.184877 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:10:44.184891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:10:44.184942 systemd-journald[1126]: Collecting audit messages is disabled. Jul 12 00:10:44.184970 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:10:44.184982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:10:44.184994 systemd-journald[1126]: Journal started Jul 12 00:10:44.185016 systemd-journald[1126]: Runtime Journal (/run/log/journal/d508e184b3c54e9a901df22a6f81d6d6) is 8.0M, max 76.6M, 68.6M free. 
Jul 12 00:10:44.194541 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:10:44.194609 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:10:44.194631 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:10:44.194645 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:10:43.931234 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:10:43.950275 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 12 00:10:43.950945 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:10:44.202028 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:10:44.202102 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:10:44.202124 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:10:44.204437 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:10:44.225513 kernel: ACPI: bus type drm_connector registered Jul 12 00:10:44.220825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:10:44.223001 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:10:44.225645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:10:44.228698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:10:44.234717 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:10:44.236717 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:10:44.238776 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:10:44.240985 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 12 00:10:44.241151 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:10:44.242981 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:10:44.245603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:10:44.253645 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:10:44.261975 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:10:44.262845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:10:44.262875 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:10:44.264702 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:10:44.274764 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:10:44.278769 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:10:44.281278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:10:44.283172 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:10:44.288737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:10:44.290593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:10:44.291892 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:10:44.296778 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:10:44.299407 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 12 00:10:44.312155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:10:44.330757 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jul 12 00:10:44.330775 systemd-tmpfiles[1157]: ACLs are not supported, ignoring. Jul 12 00:10:44.336199 systemd-journald[1126]: Time spent on flushing to /var/log/journal/d508e184b3c54e9a901df22a6f81d6d6 is 70.417ms for 1133 entries. Jul 12 00:10:44.336199 systemd-journald[1126]: System Journal (/var/log/journal/d508e184b3c54e9a901df22a6f81d6d6) is 8.0M, max 584.8M, 576.8M free. Jul 12 00:10:44.421729 systemd-journald[1126]: Received client request to flush runtime journal. Jul 12 00:10:44.421785 kernel: loop0: detected capacity change from 0 to 114432 Jul 12 00:10:44.423451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:10:44.423506 kernel: loop1: detected capacity change from 0 to 114328 Jul 12 00:10:44.344963 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:10:44.345974 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 00:10:44.354768 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:10:44.357601 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:10:44.364728 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:10:44.426227 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:10:44.427079 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:10:44.431253 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:10:44.433948 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:10:44.445530 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jul 12 00:10:44.454129 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:10:44.462678 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:10:44.471529 kernel: loop2: detected capacity change from 0 to 207008 Jul 12 00:10:44.489716 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 12 00:10:44.497469 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jul 12 00:10:44.497515 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Jul 12 00:10:44.506680 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:10:44.523672 kernel: loop3: detected capacity change from 0 to 8 Jul 12 00:10:44.549823 kernel: loop4: detected capacity change from 0 to 114432 Jul 12 00:10:44.570531 kernel: loop5: detected capacity change from 0 to 114328 Jul 12 00:10:44.594674 kernel: loop6: detected capacity change from 0 to 207008 Jul 12 00:10:44.614541 kernel: loop7: detected capacity change from 0 to 8 Jul 12 00:10:44.614967 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jul 12 00:10:44.615446 (sd-merge)[1194]: Merged extensions into '/usr'. Jul 12 00:10:44.620710 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:10:44.620883 systemd[1]: Reloading... Jul 12 00:10:44.714518 zram_generator::config[1218]: No configuration found. Jul 12 00:10:44.866677 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:10:44.926501 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 12 00:10:44.940560 systemd[1]: Reloading finished in 319 ms. Jul 12 00:10:44.985593 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:10:44.987051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:10:44.998739 systemd[1]: Starting ensure-sysext.service... Jul 12 00:10:45.003721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:10:45.016186 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:10:45.016207 systemd[1]: Reloading... Jul 12 00:10:45.065776 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:10:45.066031 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 00:10:45.066845 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:10:45.067077 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jul 12 00:10:45.067192 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Jul 12 00:10:45.073353 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:10:45.073599 systemd-tmpfiles[1259]: Skipping /boot Jul 12 00:10:45.091020 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:10:45.091564 systemd-tmpfiles[1259]: Skipping /boot Jul 12 00:10:45.133509 zram_generator::config[1294]: No configuration found. Jul 12 00:10:45.228181 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:10:45.276535 systemd[1]: Reloading finished in 259 ms. 
Jul 12 00:10:45.293898 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:10:45.300338 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:10:45.310743 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:10:45.317712 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:10:45.322991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:10:45.329349 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:10:45.335962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:10:45.343817 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 00:10:45.351260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:10:45.355358 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:10:45.365791 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:10:45.370829 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:10:45.371754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:10:45.376577 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:10:45.381949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:10:45.382105 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 12 00:10:45.385525 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:10:45.389620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:10:45.390590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:10:45.398925 systemd[1]: Finished ensure-sysext.service. Jul 12 00:10:45.403917 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:10:45.408092 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:10:45.409035 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jul 12 00:10:45.424983 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:10:45.430864 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:10:45.431950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:10:45.433335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:10:45.434435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:10:45.434613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:10:45.436776 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:10:45.436981 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:10:45.440039 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:10:45.441297 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:10:45.448327 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:10:45.462884 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 12 00:10:45.464204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:10:45.464296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:10:45.497641 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:10:45.508153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:10:45.511325 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:10:45.511902 augenrules[1373]: No rules Jul 12 00:10:45.514623 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:10:45.516396 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:10:45.577211 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:10:45.637008 systemd-networkd[1360]: lo: Link UP Jul 12 00:10:45.637019 systemd-networkd[1360]: lo: Gained carrier Jul 12 00:10:45.642411 systemd-networkd[1360]: Enumeration completed Jul 12 00:10:45.644693 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 00:10:45.646818 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:45.646826 systemd-networkd[1360]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:10:45.652682 systemd-networkd[1360]: eth0: Link UP Jul 12 00:10:45.652979 systemd-networkd[1360]: eth0: Gained carrier Jul 12 00:10:45.653010 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:45.657741 systemd-resolved[1328]: Positive Trust Anchors: Jul 12 00:10:45.657763 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:10:45.657796 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:10:45.658312 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:10:45.665222 systemd-resolved[1328]: Using system hostname 'ci-4081-3-4-n-51c90d58be'. Jul 12 00:10:45.667403 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:10:45.668666 systemd[1]: Reached target network.target - Network. Jul 12 00:10:45.669323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:10:45.697163 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:10:45.698063 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:10:45.706179 systemd-networkd[1360]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 12 00:10:45.715767 systemd-networkd[1360]: eth0: DHCPv4 address 91.99.93.35/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jul 12 00:10:45.716742 systemd-timesyncd[1348]: Network configuration changed, trying to establish connection. Jul 12 00:10:45.746516 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1362) Jul 12 00:10:45.775503 kernel: mousedev: PS/2 mouse device common for all mice Jul 12 00:10:45.780226 systemd-networkd[1360]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:45.780237 systemd-networkd[1360]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:10:45.782717 systemd-networkd[1360]: eth1: Link UP Jul 12 00:10:45.782840 systemd-networkd[1360]: eth1: Gained carrier Jul 12 00:10:45.782904 systemd-networkd[1360]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:10:45.819646 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jul 12 00:10:45.819773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:10:45.829768 systemd-networkd[1360]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:10:45.830761 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:10:45.835006 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:10:45.838321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:10:45.839067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 12 00:10:45.839108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:10:45.839623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:10:45.839806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:10:45.847267 systemd-timesyncd[1348]: Contacted time server 144.76.139.8:123 (0.flatcar.pool.ntp.org). Jul 12 00:10:45.847320 systemd-timesyncd[1348]: Initial clock synchronization to Sat 2025-07-12 00:10:45.845655 UTC. Jul 12 00:10:45.862528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:10:45.862742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:10:45.863736 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:10:45.874673 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:10:45.874864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:10:45.876725 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:10:45.902558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jul 12 00:10:45.913868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 12 00:10:45.918234 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jul 12 00:10:45.918333 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jul 12 00:10:45.918373 kernel: [drm] features: -context_init Jul 12 00:10:45.918385 kernel: [drm] number of scanouts: 1 Jul 12 00:10:45.918398 kernel: [drm] number of cap sets: 0 Jul 12 00:10:45.921614 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jul 12 00:10:45.921799 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:10:45.937561 kernel: Console: switching to colour frame buffer device 160x50 Jul 12 00:10:45.940979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:10:45.948194 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jul 12 00:10:45.955745 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:10:45.955946 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:10:45.962709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:10:46.047153 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:10:46.065773 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:10:46.072759 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:10:46.095828 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:10:46.127612 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:10:46.128868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:10:46.129808 systemd[1]: Reached target sysinit.target - System Initialization. 
Jul 12 00:10:46.130810 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:10:46.131665 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:10:46.132754 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:10:46.133457 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:10:46.134200 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:10:46.136387 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:10:46.136515 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:10:46.137644 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:10:46.140648 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:10:46.142866 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:10:46.148970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:10:46.151638 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:10:46.153157 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:10:46.154050 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:10:46.154752 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:10:46.155309 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:10:46.155339 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:10:46.157674 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:10:46.163690 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:10:46.165679 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 12 00:10:46.169767 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:10:46.178876 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:10:46.184157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:10:46.184984 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:10:46.187723 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:10:46.192683 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:10:46.198705 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jul 12 00:10:46.210504 jq[1448]: false Jul 12 00:10:46.215734 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:10:46.222735 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:10:46.231752 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:10:46.232828 dbus-daemon[1447]: [system] SELinux support is enabled Jul 12 00:10:46.233200 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:10:46.234966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:10:46.238417 coreos-metadata[1446]: Jul 12 00:10:46.238 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jul 12 00:10:46.238970 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 00:10:46.242659 coreos-metadata[1446]: Jul 12 00:10:46.242 INFO Fetch successful Jul 12 00:10:46.242659 coreos-metadata[1446]: Jul 12 00:10:46.242 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jul 12 00:10:46.242672 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:10:46.247518 coreos-metadata[1446]: Jul 12 00:10:46.242 INFO Fetch successful Jul 12 00:10:46.246140 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:10:46.252065 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:10:46.257916 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:10:46.258119 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:10:46.265218 jq[1460]: true Jul 12 00:10:46.271318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:10:46.271382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 00:10:46.274241 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:10:46.274264 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:10:46.275650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:10:46.275860 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 12 00:10:46.305776 extend-filesystems[1449]: Found loop4 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found loop5 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found loop6 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found loop7 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda1 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda2 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda3 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found usr Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda4 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda6 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda7 Jul 12 00:10:46.305776 extend-filesystems[1449]: Found sda9 Jul 12 00:10:46.305776 extend-filesystems[1449]: Checking size of /dev/sda9 Jul 12 00:10:46.354185 tar[1470]: linux-arm64/LICENSE Jul 12 00:10:46.354185 tar[1470]: linux-arm64/helm Jul 12 00:10:46.333592 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:10:46.351791 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:10:46.351982 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:10:46.354951 jq[1474]: true Jul 12 00:10:46.377121 extend-filesystems[1449]: Resized partition /dev/sda9 Jul 12 00:10:46.382228 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:10:46.385581 update_engine[1459]: I20250712 00:10:46.381554 1459 main.cc:92] Flatcar Update Engine starting Jul 12 00:10:46.393623 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jul 12 00:10:46.393793 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jul 12 00:10:46.394118 update_engine[1459]: I20250712 00:10:46.393851 1459 update_check_scheduler.cc:74] Next update check in 11m10s Jul 12 00:10:46.396694 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:10:46.397757 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:10:46.428715 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:10:46.483886 systemd-logind[1458]: New seat seat0. Jul 12 00:10:46.495822 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:10:46.495851 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jul 12 00:10:46.496416 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 00:10:46.536528 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1389) Jul 12 00:10:46.555342 bash[1519]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:10:46.549351 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:10:46.567145 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jul 12 00:10:46.566107 systemd[1]: Starting sshkeys.service... Jul 12 00:10:46.592875 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 12 00:10:46.594598 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 12 00:10:46.594598 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Jul 12 00:10:46.594598 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jul 12 00:10:46.614121 extend-filesystems[1449]: Resized filesystem in /dev/sda9 Jul 12 00:10:46.614121 extend-filesystems[1449]: Found sr0 Jul 12 00:10:46.604838 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jul 12 00:10:46.606751 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:10:46.607291 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:10:46.666741 coreos-metadata[1529]: Jul 12 00:10:46.666 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jul 12 00:10:46.668116 coreos-metadata[1529]: Jul 12 00:10:46.667 INFO Fetch successful
Jul 12 00:10:46.669930 locksmithd[1501]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:10:46.671662 unknown[1529]: wrote ssh authorized keys file for user: core
Jul 12 00:10:46.673407 containerd[1487]: time="2025-07-12T00:10:46.673278064Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 12 00:10:46.701048 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:10:46.702606 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 12 00:10:46.711584 systemd[1]: Finished sshkeys.service.
Jul 12 00:10:46.715470 containerd[1487]: time="2025-07-12T00:10:46.715413100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.717291 containerd[1487]: time="2025-07-12T00:10:46.717244018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:46.717512 containerd[1487]: time="2025-07-12T00:10:46.717467718Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:10:46.717655 containerd[1487]: time="2025-07-12T00:10:46.717633944Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:10:46.717985 containerd[1487]: time="2025-07-12T00:10:46.717964235Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 12 00:10:46.718125 containerd[1487]: time="2025-07-12T00:10:46.718055547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.718318 containerd[1487]: time="2025-07-12T00:10:46.718297205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:46.718609 containerd[1487]: time="2025-07-12T00:10:46.718444752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.718867 containerd[1487]: time="2025-07-12T00:10:46.718842557Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:46.718979 containerd[1487]: time="2025-07-12T00:10:46.718915591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.719149 containerd[1487]: time="2025-07-12T00:10:46.719026221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:46.719149 containerd[1487]: time="2025-07-12T00:10:46.719042619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.719337 containerd[1487]: time="2025-07-12T00:10:46.719266000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.720292 containerd[1487]: time="2025-07-12T00:10:46.719842509Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:10:46.720292 containerd[1487]: time="2025-07-12T00:10:46.719995215Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:10:46.720292 containerd[1487]: time="2025-07-12T00:10:46.720009814Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:10:46.720292 containerd[1487]: time="2025-07-12T00:10:46.720091967Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:10:46.720292 containerd[1487]: time="2025-07-12T00:10:46.720135803Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:10:46.720677 systemd-networkd[1360]: eth0: Gained IPv6LL
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.729795869Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.729888581Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.729916698Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.729947136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.729967534Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730193034Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730587799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730770623Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730794861Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730818339Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730838777Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730857375Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730881373Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.735721 containerd[1487]: time="2025-07-12T00:10:46.730903451Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.731610 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.730924449Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.730943128Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.730960566Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.730978205Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731005882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731026560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731049638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731069796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731090195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731110313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731131031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731151909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731170388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736148 containerd[1487]: time="2025-07-12T00:10:46.731191026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.733441 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731209024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731229702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731251100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731275338Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731304416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731328574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.736524 containerd[1487]: time="2025-07-12T00:10:46.731343892Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:10:46.739002 containerd[1487]: time="2025-07-12T00:10:46.738947140Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:10:46.739206 containerd[1487]: time="2025-07-12T00:10:46.739147003Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 12 00:10:46.739298 containerd[1487]: time="2025-07-12T00:10:46.739280911Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:10:46.739373 containerd[1487]: time="2025-07-12T00:10:46.739354704Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 12 00:10:46.739424 containerd[1487]: time="2025-07-12T00:10:46.739411739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.739500 containerd[1487]: time="2025-07-12T00:10:46.739471334Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 12 00:10:46.739553 containerd[1487]: time="2025-07-12T00:10:46.739541048Z" level=info msg="NRI interface is disabled by configuration."
Jul 12 00:10:46.739608 containerd[1487]: time="2025-07-12T00:10:46.739594603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:10:46.740373 containerd[1487]: time="2025-07-12T00:10:46.740284342Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:10:46.740802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:46.743129 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:10:46.745794 containerd[1487]: time="2025-07-12T00:10:46.744569443Z" level=info msg="Connect containerd service"
Jul 12 00:10:46.745794 containerd[1487]: time="2025-07-12T00:10:46.744655756Z" level=info msg="using legacy CRI server"
Jul 12 00:10:46.745794 containerd[1487]: time="2025-07-12T00:10:46.744665635Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:10:46.745794 containerd[1487]: time="2025-07-12T00:10:46.744771146Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746457197Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746714534Z" level=info msg="Start subscribing containerd event"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746794207Z" level=info msg="Start recovering state"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746879919Z" level=info msg="Start event monitor"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746892318Z" level=info msg="Start snapshots syncer"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746914596Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.746927475Z" level=info msg="Start streaming server"
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.747048464Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:10:46.747918 containerd[1487]: time="2025-07-12T00:10:46.747091901Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:10:46.747266 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:10:46.759181 containerd[1487]: time="2025-07-12T00:10:46.759117398Z" level=info msg="containerd successfully booted in 0.086930s"
Jul 12 00:10:46.813102 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:10:47.102620 systemd-networkd[1360]: eth1: Gained IPv6LL
Jul 12 00:10:47.472869 tar[1470]: linux-arm64/README.md
Jul 12 00:10:47.500706 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:10:47.703906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:47.723457 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:10:48.272022 kubelet[1559]: E0712 00:10:48.271975 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:10:48.274928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:10:48.275068 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:10:48.364185 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:10:48.389612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:10:48.398010 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:10:48.408255 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:10:48.409598 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:10:48.420304 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:10:48.430303 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:10:48.437894 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:10:48.440761 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 12 00:10:48.441782 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:10:48.442525 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:10:48.446623 systemd[1]: Startup finished in 800ms (kernel) + 9.694s (initrd) + 5.088s (userspace) = 15.583s.
Jul 12 00:10:58.526385 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:10:58.535907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:10:58.664109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:10:58.670465 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:10:58.720913 kubelet[1595]: E0712 00:10:58.720857 1595 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:10:58.725440 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:10:58.725770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:08.976267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 12 00:11:08.983963 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:09.126750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:09.129296 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:11:09.173320 kubelet[1610]: E0712 00:11:09.173266 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:11:09.175574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:11:09.175833 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:19.426751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 12 00:11:19.432824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:19.562881 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:11:19.563543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:19.612031 kubelet[1626]: E0712 00:11:19.611946 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:11:19.615736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:11:19.616104 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:28.434980 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:11:28.447127 systemd[1]: Started sshd@0-91.99.93.35:22-139.178.68.195:59712.service - OpenSSH per-connection server daemon (139.178.68.195:59712).
Jul 12 00:11:29.447436 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 59712 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:29.451368 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:29.464123 systemd-logind[1458]: New session 1 of user core.
Jul 12 00:11:29.464596 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 00:11:29.475076 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 00:11:29.489946 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 12 00:11:29.500031 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 00:11:29.503862 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:11:29.621129 systemd[1639]: Queued start job for default target default.target.
Jul 12 00:11:29.632379 systemd[1639]: Created slice app.slice - User Application Slice.
Jul 12 00:11:29.632942 systemd[1639]: Reached target paths.target - Paths.
Jul 12 00:11:29.633361 systemd[1639]: Reached target timers.target - Timers.
Jul 12 00:11:29.635043 systemd[1639]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 00:11:29.641709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 12 00:11:29.644888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:29.657999 systemd[1639]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 00:11:29.658320 systemd[1639]: Reached target sockets.target - Sockets.
Jul 12 00:11:29.658366 systemd[1639]: Reached target basic.target - Basic System.
Jul 12 00:11:29.658462 systemd[1639]: Reached target default.target - Main User Target.
Jul 12 00:11:29.658602 systemd[1639]: Startup finished in 144ms.
Jul 12 00:11:29.659135 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 00:11:29.662537 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 00:11:29.791821 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:29.812403 (kubelet)[1656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:11:29.855913 kubelet[1656]: E0712 00:11:29.855831 1656 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:11:29.859369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:11:29.859739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:30.369824 systemd[1]: Started sshd@1-91.99.93.35:22-139.178.68.195:59722.service - OpenSSH per-connection server daemon (139.178.68.195:59722).
Jul 12 00:11:31.380050 sshd[1665]: Accepted publickey for core from 139.178.68.195 port 59722 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:31.382074 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:31.388377 systemd-logind[1458]: New session 2 of user core.
Jul 12 00:11:31.400840 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 00:11:31.865025 update_engine[1459]: I20250712 00:11:31.864813 1459 update_attempter.cc:509] Updating boot flags...
Jul 12 00:11:31.928555 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1678)
Jul 12 00:11:32.004562 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1680)
Jul 12 00:11:32.047769 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1680)
Jul 12 00:11:32.077822 sshd[1665]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:32.082317 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:11:32.083068 systemd[1]: sshd@1-91.99.93.35:22-139.178.68.195:59722.service: Deactivated successfully.
Jul 12 00:11:32.085031 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:11:32.086637 systemd-logind[1458]: Removed session 2.
Jul 12 00:11:32.273950 systemd[1]: Started sshd@2-91.99.93.35:22-139.178.68.195:59728.service - OpenSSH per-connection server daemon (139.178.68.195:59728).
Jul 12 00:11:33.330745 sshd[1693]: Accepted publickey for core from 139.178.68.195 port 59728 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:33.332499 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:33.337537 systemd-logind[1458]: New session 3 of user core.
Jul 12 00:11:33.348812 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 00:11:34.062140 sshd[1693]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:34.068823 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:11:34.070277 systemd[1]: sshd@2-91.99.93.35:22-139.178.68.195:59728.service: Deactivated successfully.
Jul 12 00:11:34.072240 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:11:34.073383 systemd-logind[1458]: Removed session 3.
Jul 12 00:11:34.232102 systemd[1]: Started sshd@3-91.99.93.35:22-139.178.68.195:59734.service - OpenSSH per-connection server daemon (139.178.68.195:59734).
Jul 12 00:11:35.223173 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 59734 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:35.225465 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:35.230724 systemd-logind[1458]: New session 4 of user core.
Jul 12 00:11:35.239859 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 00:11:35.914648 sshd[1700]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:35.919757 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:11:35.920102 systemd[1]: sshd@3-91.99.93.35:22-139.178.68.195:59734.service: Deactivated successfully.
Jul 12 00:11:35.922445 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:11:35.924804 systemd-logind[1458]: Removed session 4.
Jul 12 00:11:36.110453 systemd[1]: Started sshd@4-91.99.93.35:22-139.178.68.195:59740.service - OpenSSH per-connection server daemon (139.178.68.195:59740).
Jul 12 00:11:37.111137 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 59740 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:37.113256 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:37.118562 systemd-logind[1458]: New session 5 of user core.
Jul 12 00:11:37.128878 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:11:37.653574 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:11:37.653879 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:11:37.675898 sudo[1710]: pam_unix(sudo:session): session closed for user root
Jul 12 00:11:37.838708 sshd[1707]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:37.845237 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:11:37.846913 systemd[1]: sshd@4-91.99.93.35:22-139.178.68.195:59740.service: Deactivated successfully.
Jul 12 00:11:37.849205 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:11:37.850642 systemd-logind[1458]: Removed session 5.
Jul 12 00:11:38.017038 systemd[1]: Started sshd@5-91.99.93.35:22-139.178.68.195:59748.service - OpenSSH per-connection server daemon (139.178.68.195:59748).
Jul 12 00:11:38.994670 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 59748 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:38.997357 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:39.004326 systemd-logind[1458]: New session 6 of user core.
Jul 12 00:11:39.009872 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:11:39.517748 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:11:39.518088 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:11:39.522374 sudo[1719]: pam_unix(sudo:session): session closed for user root
Jul 12 00:11:39.529070 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 12 00:11:39.529844 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:11:39.545938 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 12 00:11:39.548291 auditctl[1722]: No rules
Jul 12 00:11:39.549247 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:11:39.549489 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 12 00:11:39.552043 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:11:39.593121 augenrules[1740]: No rules
Jul 12 00:11:39.594597 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:11:39.597226 sudo[1718]: pam_unix(sudo:session): session closed for user root
Jul 12 00:11:39.756839 sshd[1715]: pam_unix(sshd:session): session closed for user core
Jul 12 00:11:39.762188 systemd[1]: sshd@5-91.99.93.35:22-139.178.68.195:59748.service: Deactivated successfully.
Jul 12 00:11:39.764421 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:11:39.767726 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:11:39.771262 systemd-logind[1458]: Removed session 6.
Jul 12 00:11:39.931056 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 12 00:11:39.939866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:39.942870 systemd[1]: Started sshd@6-91.99.93.35:22-139.178.68.195:47958.service - OpenSSH per-connection server daemon (139.178.68.195:47958).
Jul 12 00:11:40.083766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:40.084810 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:11:40.135922 kubelet[1758]: E0712 00:11:40.135855 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:11:40.139547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:11:40.139838 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:40.939278 sshd[1749]: Accepted publickey for core from 139.178.68.195 port 47958 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:11:40.941949 sshd[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:11:40.947420 systemd-logind[1458]: New session 7 of user core.
Jul 12 00:11:40.958905 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:11:41.468400 sudo[1766]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:11:41.468712 sudo[1766]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:11:41.778000 (dockerd)[1781]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:11:41.778083 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:11:42.038856 dockerd[1781]: time="2025-07-12T00:11:42.036724604Z" level=info msg="Starting up"
Jul 12 00:11:42.137950 dockerd[1781]: time="2025-07-12T00:11:42.137578484Z" level=info msg="Loading containers: start."
Jul 12 00:11:42.260561 kernel: Initializing XFRM netlink socket
Jul 12 00:11:42.351146 systemd-networkd[1360]: docker0: Link UP
Jul 12 00:11:42.372492 dockerd[1781]: time="2025-07-12T00:11:42.372372965Z" level=info msg="Loading containers: done."
Jul 12 00:11:42.387893 dockerd[1781]: time="2025-07-12T00:11:42.387637008Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:11:42.387893 dockerd[1781]: time="2025-07-12T00:11:42.387760008Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 12 00:11:42.388073 dockerd[1781]: time="2025-07-12T00:11:42.387913088Z" level=info msg="Daemon has completed initialization"
Jul 12 00:11:42.431966 dockerd[1781]: time="2025-07-12T00:11:42.431642703Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:11:42.432691 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:11:43.517080 containerd[1487]: time="2025-07-12T00:11:43.516977716Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 12 00:11:44.150003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3456071433.mount: Deactivated successfully.
Jul 12 00:11:46.135877 containerd[1487]: time="2025-07-12T00:11:46.135792693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:46.138240 containerd[1487]: time="2025-07-12T00:11:46.138162689Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328286"
Jul 12 00:11:46.138617 containerd[1487]: time="2025-07-12T00:11:46.138538768Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:46.142208 containerd[1487]: time="2025-07-12T00:11:46.142141922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:46.143381 containerd[1487]: time="2025-07-12T00:11:46.143328679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 2.626308923s"
Jul 12 00:11:46.143381 containerd[1487]: time="2025-07-12T00:11:46.143372919Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 12 00:11:46.144360 containerd[1487]: time="2025-07-12T00:11:46.144165798Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 12 00:11:48.825565 containerd[1487]: time="2025-07-12T00:11:48.825429644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:48.827983 containerd[1487]: time="2025-07-12T00:11:48.827916760Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529248"
Jul 12 00:11:48.828790 containerd[1487]: time="2025-07-12T00:11:48.828672558Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:48.832525 containerd[1487]: time="2025-07-12T00:11:48.832442232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:48.834347 containerd[1487]: time="2025-07-12T00:11:48.834153509Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 2.689946191s"
Jul 12 00:11:48.834347 containerd[1487]: time="2025-07-12T00:11:48.834210669Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 12 00:11:48.837854 containerd[1487]: time="2025-07-12T00:11:48.837643304Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 12 00:11:50.311694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 12 00:11:50.323004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:11:50.450663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:11:50.455328 (kubelet)[1989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:11:50.506575 kubelet[1989]: E0712 00:11:50.506128 1989 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:11:50.508660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:11:50.508833 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:11:50.667408 containerd[1487]: time="2025-07-12T00:11:50.667237697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:50.669740 containerd[1487]: time="2025-07-12T00:11:50.669630057Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484161"
Jul 12 00:11:50.671360 containerd[1487]: time="2025-07-12T00:11:50.671284337Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:50.675631 containerd[1487]: time="2025-07-12T00:11:50.675579457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:50.677366 containerd[1487]: time="2025-07-12T00:11:50.676958297Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.839277153s"
Jul 12 00:11:50.677366 containerd[1487]: time="2025-07-12T00:11:50.677007217Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 12 00:11:50.677654 containerd[1487]: time="2025-07-12T00:11:50.677557057Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 12 00:11:51.763901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284770000.mount: Deactivated successfully.
Jul 12 00:11:52.123553 containerd[1487]: time="2025-07-12T00:11:52.123389892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:52.125746 containerd[1487]: time="2025-07-12T00:11:52.125608720Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378432"
Jul 12 00:11:52.127801 containerd[1487]: time="2025-07-12T00:11:52.127696707Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:52.131293 containerd[1487]: time="2025-07-12T00:11:52.131185631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:52.132170 containerd[1487]: time="2025-07-12T00:11:52.131992122Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.454394624s"
Jul 12 00:11:52.132170 containerd[1487]: time="2025-07-12T00:11:52.132040282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 12 00:11:52.132852 containerd[1487]: time="2025-07-12T00:11:52.132578769Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:11:52.695279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377230074.mount: Deactivated successfully.
Jul 12 00:11:53.473539 containerd[1487]: time="2025-07-12T00:11:53.473251609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.475516 containerd[1487]: time="2025-07-12T00:11:53.475241154Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714"
Jul 12 00:11:53.477373 containerd[1487]: time="2025-07-12T00:11:53.477313500Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.481382 containerd[1487]: time="2025-07-12T00:11:53.481297989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.483914 containerd[1487]: time="2025-07-12T00:11:53.483574937Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.350958848s"
Jul 12 00:11:53.483914 containerd[1487]: time="2025-07-12T00:11:53.483629978Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:11:53.484559 containerd[1487]: time="2025-07-12T00:11:53.484515389Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:11:53.961465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805217492.mount: Deactivated successfully.
Jul 12 00:11:53.968942 containerd[1487]: time="2025-07-12T00:11:53.968840431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.969971 containerd[1487]: time="2025-07-12T00:11:53.969918444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
Jul 12 00:11:53.970986 containerd[1487]: time="2025-07-12T00:11:53.970919417Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.974082 containerd[1487]: time="2025-07-12T00:11:53.974026655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:53.975427 containerd[1487]: time="2025-07-12T00:11:53.975147069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 490.582279ms"
Jul 12 00:11:53.975427 containerd[1487]: time="2025-07-12T00:11:53.975190630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:11:53.975747 containerd[1487]: time="2025-07-12T00:11:53.975657035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 12 00:11:54.621789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791534541.mount: Deactivated successfully.
Jul 12 00:11:57.443651 containerd[1487]: time="2025-07-12T00:11:57.443560752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:57.446039 containerd[1487]: time="2025-07-12T00:11:57.445968938Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812537"
Jul 12 00:11:57.446039 containerd[1487]: time="2025-07-12T00:11:57.445992539Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:57.450529 containerd[1487]: time="2025-07-12T00:11:57.450455348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:11:57.452737 containerd[1487]: time="2025-07-12T00:11:57.452559771Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.476865935s"
Jul 12 00:11:57.452737 containerd[1487]: time="2025-07-12T00:11:57.452611612Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 12 00:12:00.562065 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jul 12 00:12:00.570738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:00.717721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:00.731315 (kubelet)[2143]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:12:00.788245 kubelet[2143]: E0712 00:12:00.788191 2143 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:12:00.792863 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:12:00.793006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:12:03.163070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:03.177642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:03.209089 systemd[1]: Reloading requested from client PID 2157 ('systemctl') (unit session-7.scope)...
Jul 12 00:12:03.209253 systemd[1]: Reloading...
Jul 12 00:12:03.325630 zram_generator::config[2200]: No configuration found.
Jul 12 00:12:03.427739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:12:03.500629 systemd[1]: Reloading finished in 290 ms.
Jul 12 00:12:03.551215 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 12 00:12:03.551316 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 12 00:12:03.552559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:03.561761 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:12:03.688266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:12:03.706085 (kubelet)[2244]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:12:03.755326 kubelet[2244]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:12:03.755326 kubelet[2244]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:12:03.755326 kubelet[2244]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:12:03.756394 kubelet[2244]: I0712 00:12:03.756194 2244 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:12:05.311515 kubelet[2244]: I0712 00:12:05.310712 2244 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 00:12:05.311515 kubelet[2244]: I0712 00:12:05.310759 2244 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:12:05.311515 kubelet[2244]: I0712 00:12:05.311212 2244 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 00:12:05.347773 kubelet[2244]: E0712 00:12:05.347717 2244 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://91.99.93.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:05.351192 kubelet[2244]: I0712 00:12:05.351088 2244 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:12:05.357405 kubelet[2244]: E0712 00:12:05.357337 2244 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:12:05.357405 kubelet[2244]: I0712 00:12:05.357380 2244 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:12:05.360605 kubelet[2244]: I0712 00:12:05.360548 2244 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:12:05.362692 kubelet[2244]: I0712 00:12:05.362621 2244 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:12:05.362990 kubelet[2244]: I0712 00:12:05.362684 2244 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-51c90d58be","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:12:05.363130 kubelet[2244]: I0712 00:12:05.363041 2244 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:12:05.363130 kubelet[2244]: I0712 00:12:05.363053 2244 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 00:12:05.363325 kubelet[2244]: I0712 00:12:05.363280 2244 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:12:05.366880 kubelet[2244]: I0712 00:12:05.366679 2244 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 00:12:05.366880 kubelet[2244]: I0712 00:12:05.366716 2244 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:12:05.366880 kubelet[2244]: I0712 00:12:05.366739 2244 kubelet.go:352] "Adding apiserver pod source"
Jul 12 00:12:05.366880 kubelet[2244]: I0712 00:12:05.366750 2244 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:12:05.372541 kubelet[2244]: W0712 00:12:05.371687 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.93.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-51c90d58be&limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:05.372541 kubelet[2244]: E0712 00:12:05.371755 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.93.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-51c90d58be&limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:05.372541 kubelet[2244]: W0712 00:12:05.372163 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.93.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:05.372541 kubelet[2244]: E0712 00:12:05.372205 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.93.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:05.372923 kubelet[2244]: I0712 00:12:05.372903 2244 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:12:05.373707 kubelet[2244]: I0712 00:12:05.373689 2244 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:12:05.373955 kubelet[2244]: W0712 00:12:05.373940 2244 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 00:12:05.377263 kubelet[2244]: I0712 00:12:05.377224 2244 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:12:05.377520 kubelet[2244]: I0712 00:12:05.377461 2244 server.go:1287] "Started kubelet"
Jul 12 00:12:05.378606 kubelet[2244]: I0712 00:12:05.378262 2244 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:12:05.381076 kubelet[2244]: I0712 00:12:05.381009 2244 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:12:05.381472 kubelet[2244]: I0712 00:12:05.381456 2244 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:12:05.382618 kubelet[2244]: I0712 00:12:05.382585 2244 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 00:12:05.387511 kubelet[2244]: I0712 00:12:05.386827 2244 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:12:05.388021 kubelet[2244]: E0712 00:12:05.387680 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.93.35:6443/api/v1/namespaces/default/events\": dial tcp 91.99.93.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-n-51c90d58be.185158949f74127e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-n-51c90d58be,UID:ci-4081-3-4-n-51c90d58be,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-51c90d58be,},FirstTimestamp:2025-07-12 00:12:05.37742195 +0000 UTC m=+1.666493822,LastTimestamp:2025-07-12 00:12:05.37742195 +0000 UTC m=+1.666493822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-51c90d58be,}"
Jul 12 00:12:05.390860 kubelet[2244]: I0712 00:12:05.390763 2244 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:12:05.392380 kubelet[2244]: I0712 00:12:05.392336 2244 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:12:05.392698 kubelet[2244]: E0712 00:12:05.392671 2244 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-51c90d58be\" not found"
Jul 12 00:12:05.393537 kubelet[2244]: I0712 00:12:05.393393 2244 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 00:12:05.393537 kubelet[2244]: I0712 00:12:05.393457 2244 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:12:05.394399 kubelet[2244]: W0712 00:12:05.393836 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.93.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:05.394399 kubelet[2244]: E0712 00:12:05.393887 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.93.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:05.394399 kubelet[2244]: E0712 00:12:05.393947 2244 controller.go:145] "Failed to ensure lease exists, will retry"
err="Get \"https://91.99.93.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-51c90d58be?timeout=10s\": dial tcp 91.99.93.35:6443: connect: connection refused" interval="200ms" Jul 12 00:12:05.395218 kubelet[2244]: I0712 00:12:05.395189 2244 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:12:05.395290 kubelet[2244]: I0712 00:12:05.395276 2244 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:12:05.397551 kubelet[2244]: I0712 00:12:05.397434 2244 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:12:05.412142 kubelet[2244]: I0712 00:12:05.412089 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:12:05.413911 kubelet[2244]: I0712 00:12:05.413560 2244 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:12:05.413911 kubelet[2244]: I0712 00:12:05.413585 2244 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:12:05.413911 kubelet[2244]: I0712 00:12:05.413607 2244 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:12:05.413911 kubelet[2244]: I0712 00:12:05.413616 2244 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 12 00:12:05.413911 kubelet[2244]: E0712 00:12:05.413665 2244 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:12:05.419366 kubelet[2244]: W0712 00:12:05.419296 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.93.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:05.419673 kubelet[2244]: E0712 00:12:05.419583 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.93.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:05.427119 kubelet[2244]: I0712 00:12:05.427083 2244 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 12 00:12:05.427263 kubelet[2244]: I0712 00:12:05.427250 2244 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 12 00:12:05.427606 kubelet[2244]: I0712 00:12:05.427337 2244 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:12:05.430397 kubelet[2244]: I0712 00:12:05.430094 2244 policy_none.go:49] "None policy: Start"
Jul 12 00:12:05.430397 kubelet[2244]: I0712 00:12:05.430123 2244 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 12 00:12:05.430397 kubelet[2244]: I0712 00:12:05.430137 2244 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:12:05.437410 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 12 00:12:05.451175 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 12 00:12:05.456345 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 12 00:12:05.469037 kubelet[2244]: I0712 00:12:05.468989 2244 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:12:05.469503 kubelet[2244]: I0712 00:12:05.469344 2244 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:12:05.469503 kubelet[2244]: I0712 00:12:05.469375 2244 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:12:05.470613 kubelet[2244]: I0712 00:12:05.470150 2244 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:12:05.473196 kubelet[2244]: E0712 00:12:05.473154 2244 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 12 00:12:05.473339 kubelet[2244]: E0712 00:12:05.473236 2244 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-4-n-51c90d58be\" not found"
Jul 12 00:12:05.528325 systemd[1]: Created slice kubepods-burstable-poda6335b3a06a352424e342a858ed4c06e.slice - libcontainer container kubepods-burstable-poda6335b3a06a352424e342a858ed4c06e.slice.
Jul 12 00:12:05.538349 kubelet[2244]: E0712 00:12:05.538301 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.541938 systemd[1]: Created slice kubepods-burstable-podc18202693942b078972bea2aef1902aa.slice - libcontainer container kubepods-burstable-podc18202693942b078972bea2aef1902aa.slice.
Jul 12 00:12:05.551296 kubelet[2244]: E0712 00:12:05.550727 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.554924 systemd[1]: Created slice kubepods-burstable-pod86d77cd511f5afb2474a3824c6308bec.slice - libcontainer container kubepods-burstable-pod86d77cd511f5afb2474a3824c6308bec.slice.
Jul 12 00:12:05.556943 kubelet[2244]: E0712 00:12:05.556913 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.572850 kubelet[2244]: I0712 00:12:05.572658 2244 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.574206 kubelet[2244]: E0712 00:12:05.573277 2244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.93.35:6443/api/v1/nodes\": dial tcp 91.99.93.35:6443: connect: connection refused" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.594720 kubelet[2244]: I0712 00:12:05.594550 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.594720 kubelet[2244]: I0712 00:12:05.594610 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.594720 kubelet[2244]: I0712 00:12:05.594647 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.594720 kubelet[2244]: I0712 00:12:05.594676 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.594720 kubelet[2244]: I0712 00:12:05.594706 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.595400 kubelet[2244]: I0712 00:12:05.594736 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86d77cd511f5afb2474a3824c6308bec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-51c90d58be\" (UID: \"86d77cd511f5afb2474a3824c6308bec\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.595400 kubelet[2244]: I0712 00:12:05.594765 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.595400 kubelet[2244]: I0712 00:12:05.594793 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.595400 kubelet[2244]: I0712 00:12:05.594891 2244 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.595400 kubelet[2244]: E0712 00:12:05.595157 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.93.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-51c90d58be?timeout=10s\": dial tcp 91.99.93.35:6443: connect: connection refused" interval="400ms"
Jul 12 00:12:05.776241 kubelet[2244]: I0712 00:12:05.776198 2244 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.776608 kubelet[2244]: E0712 00:12:05.776565 2244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.93.35:6443/api/v1/nodes\": dial tcp 91.99.93.35:6443: connect: connection refused" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:05.841689 containerd[1487]: time="2025-07-12T00:12:05.841232091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-51c90d58be,Uid:a6335b3a06a352424e342a858ed4c06e,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:05.852335 containerd[1487]: time="2025-07-12T00:12:05.852229709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-51c90d58be,Uid:c18202693942b078972bea2aef1902aa,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:05.858746 containerd[1487]: time="2025-07-12T00:12:05.858329604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-51c90d58be,Uid:86d77cd511f5afb2474a3824c6308bec,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:05.996344 kubelet[2244]: E0712 00:12:05.996271 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.93.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-51c90d58be?timeout=10s\": dial tcp 91.99.93.35:6443: connect: connection refused" interval="800ms"
Jul 12 00:12:06.159403 kubelet[2244]: E0712 00:12:06.159155 2244 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.93.35:6443/api/v1/namespaces/default/events\": dial tcp 91.99.93.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-4-n-51c90d58be.185158949f74127e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-4-n-51c90d58be,UID:ci-4081-3-4-n-51c90d58be,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-51c90d58be,},FirstTimestamp:2025-07-12 00:12:05.37742195 +0000 UTC m=+1.666493822,LastTimestamp:2025-07-12 00:12:05.37742195 +0000 UTC m=+1.666493822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-51c90d58be,}"
Jul 12 00:12:06.180152 kubelet[2244]: I0712 00:12:06.180095 2244 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:06.180576 kubelet[2244]: E0712 00:12:06.180531 2244 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.93.35:6443/api/v1/nodes\": dial tcp 91.99.93.35:6443: connect: connection refused" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:06.280527 kubelet[2244]: W0712 00:12:06.280367 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://91.99.93.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:06.280527 kubelet[2244]: E0712 00:12:06.280456 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://91.99.93.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:06.312780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275538633.mount: Deactivated successfully.
Jul 12 00:12:06.320648 containerd[1487]: time="2025-07-12T00:12:06.319955732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:12:06.323986 containerd[1487]: time="2025-07-12T00:12:06.323910246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
Jul 12 00:12:06.326075 containerd[1487]: time="2025-07-12T00:12:06.326002344Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:12:06.331520 containerd[1487]: time="2025-07-12T00:12:06.329690136Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:12:06.332420 containerd[1487]: time="2025-07-12T00:12:06.332334999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:12:06.333929 containerd[1487]: time="2025-07-12T00:12:06.333859933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:12:06.336243 containerd[1487]: time="2025-07-12T00:12:06.336199033Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 12 00:12:06.340990 containerd[1487]: time="2025-07-12T00:12:06.340066827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 12 00:12:06.340990 containerd[1487]: time="2025-07-12T00:12:06.340752233Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.349179ms"
Jul 12 00:12:06.342177 containerd[1487]: time="2025-07-12T00:12:06.342125524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.688879ms"
Jul 12 00:12:06.346988 containerd[1487]: time="2025-07-12T00:12:06.346937406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.573656ms"
Jul 12 00:12:06.445412 kubelet[2244]: W0712 00:12:06.445172 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://91.99.93.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:06.445412 kubelet[2244]: E0712 00:12:06.445234 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://91.99.93.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:06.483975 containerd[1487]: time="2025-07-12T00:12:06.483547274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:06.483975 containerd[1487]: time="2025-07-12T00:12:06.483611395Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:06.483975 containerd[1487]: time="2025-07-12T00:12:06.483627395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.483975 containerd[1487]: time="2025-07-12T00:12:06.483714195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.489087 containerd[1487]: time="2025-07-12T00:12:06.488062513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:06.489087 containerd[1487]: time="2025-07-12T00:12:06.488178914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:06.489087 containerd[1487]: time="2025-07-12T00:12:06.488210915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.489087 containerd[1487]: time="2025-07-12T00:12:06.488327316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.492648 containerd[1487]: time="2025-07-12T00:12:06.491099260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:06.492648 containerd[1487]: time="2025-07-12T00:12:06.491258541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:06.492648 containerd[1487]: time="2025-07-12T00:12:06.491448183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.492648 containerd[1487]: time="2025-07-12T00:12:06.491870466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:06.511009 systemd[1]: Started cri-containerd-dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab.scope - libcontainer container dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab.
Jul 12 00:12:06.517132 systemd[1]: Started cri-containerd-eaee0c14e200b9ea8b3432b92326d8a868a575b6818f373e341a7da56c56bcdb.scope - libcontainer container eaee0c14e200b9ea8b3432b92326d8a868a575b6818f373e341a7da56c56bcdb.
Jul 12 00:12:06.524294 systemd[1]: Started cri-containerd-120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149.scope - libcontainer container 120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149.
Jul 12 00:12:06.565460 containerd[1487]: time="2025-07-12T00:12:06.565220304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-4-n-51c90d58be,Uid:a6335b3a06a352424e342a858ed4c06e,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaee0c14e200b9ea8b3432b92326d8a868a575b6818f373e341a7da56c56bcdb\""
Jul 12 00:12:06.574728 containerd[1487]: time="2025-07-12T00:12:06.574058981Z" level=info msg="CreateContainer within sandbox \"eaee0c14e200b9ea8b3432b92326d8a868a575b6818f373e341a7da56c56bcdb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 12 00:12:06.600332 containerd[1487]: time="2025-07-12T00:12:06.599637883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-4-n-51c90d58be,Uid:c18202693942b078972bea2aef1902aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149\""
Jul 12 00:12:06.602424 containerd[1487]: time="2025-07-12T00:12:06.602382227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-4-n-51c90d58be,Uid:86d77cd511f5afb2474a3824c6308bec,Namespace:kube-system,Attempt:0,} returns sandbox id \"dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab\""
Jul 12 00:12:06.605109 containerd[1487]: time="2025-07-12T00:12:06.604999810Z" level=info msg="CreateContainer within sandbox \"120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 12 00:12:06.605674 containerd[1487]: time="2025-07-12T00:12:06.605623575Z" level=info msg="CreateContainer within sandbox \"eaee0c14e200b9ea8b3432b92326d8a868a575b6818f373e341a7da56c56bcdb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e5100beddbe2befa0b1c6897165daed5f8f67366e6eb15a47361bad43c4edd1\""
Jul 12 00:12:06.606788 containerd[1487]: time="2025-07-12T00:12:06.606500023Z" level=info msg="StartContainer for \"3e5100beddbe2befa0b1c6897165daed5f8f67366e6eb15a47361bad43c4edd1\""
Jul 12 00:12:06.607834 containerd[1487]: time="2025-07-12T00:12:06.607770434Z" level=info msg="CreateContainer within sandbox \"dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 12 00:12:06.634254 containerd[1487]: time="2025-07-12T00:12:06.634174904Z" level=info msg="CreateContainer within sandbox \"120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49\""
Jul 12 00:12:06.635392 containerd[1487]: time="2025-07-12T00:12:06.635277193Z" level=info msg="StartContainer for \"2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49\""
Jul 12 00:12:06.640973 containerd[1487]: time="2025-07-12T00:12:06.640921122Z" level=info msg="CreateContainer within sandbox \"dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9\""
Jul 12 00:12:06.641789 containerd[1487]: time="2025-07-12T00:12:06.641734569Z" level=info msg="StartContainer for \"790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9\""
Jul 12 00:12:06.643277 systemd[1]: Started cri-containerd-3e5100beddbe2befa0b1c6897165daed5f8f67366e6eb15a47361bad43c4edd1.scope - libcontainer container 3e5100beddbe2befa0b1c6897165daed5f8f67366e6eb15a47361bad43c4edd1.
Jul 12 00:12:06.677844 kubelet[2244]: W0712 00:12:06.677766 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://91.99.93.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:06.677981 kubelet[2244]: E0712 00:12:06.677855 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://91.99.93.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:06.678875 systemd[1]: Started cri-containerd-790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9.scope - libcontainer container 790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9.
Jul 12 00:12:06.693844 systemd[1]: Started cri-containerd-2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49.scope - libcontainer container 2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49.
Jul 12 00:12:06.745181 containerd[1487]: time="2025-07-12T00:12:06.744952277Z" level=info msg="StartContainer for \"790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9\" returns successfully"
Jul 12 00:12:06.746583 containerd[1487]: time="2025-07-12T00:12:06.745718073Z" level=info msg="StartContainer for \"3e5100beddbe2befa0b1c6897165daed5f8f67366e6eb15a47361bad43c4edd1\" returns successfully"
Jul 12 00:12:06.781229 containerd[1487]: time="2025-07-12T00:12:06.781079381Z" level=info msg="StartContainer for \"2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49\" returns successfully"
Jul 12 00:12:06.797515 kubelet[2244]: E0712 00:12:06.797444 2244 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.93.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-4-n-51c90d58be?timeout=10s\": dial tcp 91.99.93.35:6443: connect: connection refused" interval="1.6s"
Jul 12 00:12:06.839349 kubelet[2244]: W0712 00:12:06.839253 2244 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://91.99.93.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-51c90d58be&limit=500&resourceVersion=0": dial tcp 91.99.93.35:6443: connect: connection refused
Jul 12 00:12:06.839349 kubelet[2244]: E0712 00:12:06.839335 2244 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://91.99.93.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-4-n-51c90d58be&limit=500&resourceVersion=0\": dial tcp 91.99.93.35:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:12:06.983757 kubelet[2244]: I0712 00:12:06.983720 2244 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:07.434219 kubelet[2244]: E0712 00:12:07.433759 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:07.438253 kubelet[2244]: E0712 00:12:07.437963 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:07.439392 kubelet[2244]: E0712 00:12:07.439365 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:08.440764 kubelet[2244]: E0712 00:12:08.440731 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:08.441118 kubelet[2244]: E0712 00:12:08.441001 2244 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.564682 kubelet[2244]: E0712 00:12:09.564637 2244 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-4-n-51c90d58be\" not found" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.683874 kubelet[2244]: I0712 00:12:09.683829 2244 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.683874 kubelet[2244]: E0712 00:12:09.683871 2244 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-4-n-51c90d58be\": node \"ci-4081-3-4-n-51c90d58be\" not found"
Jul 12 00:12:09.734193 kubelet[2244]: E0712 00:12:09.734148 2244 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-51c90d58be\" not found"
Jul 12 00:12:09.793518 kubelet[2244]: I0712 00:12:09.793466 2244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.823312 kubelet[2244]: E0712 00:12:09.823187 2244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-4-n-51c90d58be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.823312 kubelet[2244]: I0712 00:12:09.823222 2244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.827999 kubelet[2244]: E0712 00:12:09.827956 2244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.827999 kubelet[2244]: I0712 00:12:09.827995 2244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:09.831357 kubelet[2244]: E0712 00:12:09.831321 2244 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be"
Jul 12 00:12:10.376274 kubelet[2244]: I0712 00:12:10.376158 2244 apiserver.go:52] "Watching apiserver"
Jul 12 00:12:10.394320 kubelet[2244]: I0712 00:12:10.394235 2244 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 12 00:12:12.148595 systemd[1]: Reloading requested from client PID 2521 ('systemctl') (unit session-7.scope)...
Jul 12 00:12:12.148621 systemd[1]: Reloading...
Jul 12 00:12:12.263176 zram_generator::config[2576]: No configuration found.
Jul 12 00:12:12.346580 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:12:12.369650 kubelet[2244]: I0712 00:12:12.369594 2244 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.431505 systemd[1]: Reloading finished in 282 ms. Jul 12 00:12:12.470227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:12.470747 kubelet[2244]: I0712 00:12:12.470214 2244 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:12:12.485660 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:12:12.485973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:12.486036 systemd[1]: kubelet.service: Consumed 2.122s CPU time, 127.0M memory peak, 0B memory swap peak. Jul 12 00:12:12.494049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:12:12.633545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:12:12.649035 (kubelet)[2605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:12:12.714798 kubelet[2605]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:12:12.714798 kubelet[2605]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 12 00:12:12.714798 kubelet[2605]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:12:12.715183 kubelet[2605]: I0712 00:12:12.714782 2605 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:12:12.725714 kubelet[2605]: I0712 00:12:12.725676 2605 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:12:12.726538 kubelet[2605]: I0712 00:12:12.725893 2605 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:12:12.726538 kubelet[2605]: I0712 00:12:12.726313 2605 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:12:12.727731 kubelet[2605]: I0712 00:12:12.727706 2605 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:12:12.735774 kubelet[2605]: I0712 00:12:12.735320 2605 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:12:12.739691 kubelet[2605]: E0712 00:12:12.739620 2605 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:12:12.739947 kubelet[2605]: I0712 00:12:12.739925 2605 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:12:12.743280 kubelet[2605]: I0712 00:12:12.743232 2605 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:12:12.743876 kubelet[2605]: I0712 00:12:12.743816 2605 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:12:12.744151 kubelet[2605]: I0712 00:12:12.743961 2605 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-4-n-51c90d58be","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:12:12.744402 kubelet[2605]: I0712 00:12:12.744274 2605 topology_manager.go:138] "Creating topology manager 
with none policy" Jul 12 00:12:12.744402 kubelet[2605]: I0712 00:12:12.744290 2605 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:12:12.744402 kubelet[2605]: I0712 00:12:12.744362 2605 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:12:12.744663 kubelet[2605]: I0712 00:12:12.744649 2605 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:12:12.744743 kubelet[2605]: I0712 00:12:12.744731 2605 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:12:12.744875 kubelet[2605]: I0712 00:12:12.744792 2605 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:12:12.744875 kubelet[2605]: I0712 00:12:12.744812 2605 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:12:12.751500 kubelet[2605]: I0712 00:12:12.748505 2605 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:12:12.751500 kubelet[2605]: I0712 00:12:12.749302 2605 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:12:12.751500 kubelet[2605]: I0712 00:12:12.750709 2605 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:12:12.751500 kubelet[2605]: I0712 00:12:12.750756 2605 server.go:1287] "Started kubelet" Jul 12 00:12:12.755699 kubelet[2605]: I0712 00:12:12.755649 2605 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:12:12.756067 kubelet[2605]: I0712 00:12:12.756051 2605 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:12:12.756205 kubelet[2605]: I0712 00:12:12.756186 2605 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:12:12.759493 kubelet[2605]: I0712 00:12:12.757153 2605 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:12:12.761723 kubelet[2605]: I0712 
00:12:12.757503 2605 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:12:12.774011 kubelet[2605]: I0712 00:12:12.758660 2605 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:12:12.774281 kubelet[2605]: I0712 00:12:12.774265 2605 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:12:12.774736 kubelet[2605]: E0712 00:12:12.774709 2605 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-4-n-51c90d58be\" not found" Jul 12 00:12:12.784525 kubelet[2605]: I0712 00:12:12.782111 2605 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:12:12.785007 kubelet[2605]: I0712 00:12:12.784968 2605 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:12:12.794621 kubelet[2605]: I0712 00:12:12.794563 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:12:12.796058 kubelet[2605]: I0712 00:12:12.796025 2605 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:12:12.796248 kubelet[2605]: I0712 00:12:12.796233 2605 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:12:12.796353 kubelet[2605]: I0712 00:12:12.796341 2605 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:12:12.796648 kubelet[2605]: I0712 00:12:12.796632 2605 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:12:12.796756 kubelet[2605]: E0712 00:12:12.796737 2605 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:12:12.796856 kubelet[2605]: I0712 00:12:12.796812 2605 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:12:12.796953 kubelet[2605]: I0712 00:12:12.796929 2605 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:12:12.800725 kubelet[2605]: I0712 00:12:12.800687 2605 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:12:12.828271 kubelet[2605]: E0712 00:12:12.827357 2605 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866149 2605 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866172 2605 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866194 2605 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866353 2605 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866362 2605 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866379 2605 policy_none.go:49] "None policy: Start" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866387 2605 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:12:12.866437 kubelet[2605]: I0712 00:12:12.866397 2605 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:12:12.867496 kubelet[2605]: I0712 00:12:12.866930 2605 state_mem.go:75] "Updated machine memory state" Jul 12 00:12:12.871338 kubelet[2605]: I0712 00:12:12.871308 2605 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:12:12.871554 kubelet[2605]: I0712 00:12:12.871531 2605 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:12:12.871648 kubelet[2605]: I0712 00:12:12.871553 2605 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:12:12.872753 kubelet[2605]: I0712 00:12:12.872728 2605 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:12:12.873783 kubelet[2605]: E0712 00:12:12.873752 2605 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 00:12:12.898920 kubelet[2605]: I0712 00:12:12.898110 2605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.898920 kubelet[2605]: I0712 00:12:12.898234 2605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.898920 kubelet[2605]: I0712 00:12:12.898691 2605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.908795 kubelet[2605]: E0712 00:12:12.908445 2605 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.978352 kubelet[2605]: I0712 00:12:12.976346 2605 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.986899 kubelet[2605]: I0712 00:12:12.986576 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.986899 kubelet[2605]: I0712 00:12:12.986618 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.986899 kubelet[2605]: I0712 00:12:12.986641 2605 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86d77cd511f5afb2474a3824c6308bec-kubeconfig\") pod \"kube-scheduler-ci-4081-3-4-n-51c90d58be\" (UID: \"86d77cd511f5afb2474a3824c6308bec\") " pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.986899 kubelet[2605]: I0712 00:12:12.986687 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-k8s-certs\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.986899 kubelet[2605]: I0712 00:12:12.986708 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.987302 kubelet[2605]: I0712 00:12:12.986727 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.987302 kubelet[2605]: I0712 00:12:12.986744 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " 
pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.987302 kubelet[2605]: I0712 00:12:12.986760 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c18202693942b078972bea2aef1902aa-ca-certs\") pod \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" (UID: \"c18202693942b078972bea2aef1902aa\") " pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.987302 kubelet[2605]: I0712 00:12:12.986775 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a6335b3a06a352424e342a858ed4c06e-ca-certs\") pod \"kube-apiserver-ci-4081-3-4-n-51c90d58be\" (UID: \"a6335b3a06a352424e342a858ed4c06e\") " pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.989155 kubelet[2605]: I0712 00:12:12.989007 2605 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-4-n-51c90d58be" Jul 12 00:12:12.989155 kubelet[2605]: I0712 00:12:12.989124 2605 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-4-n-51c90d58be" Jul 12 00:12:13.149206 sudo[2640]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:12:13.150220 sudo[2640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 00:12:13.659879 sudo[2640]: pam_unix(sudo:session): session closed for user root Jul 12 00:12:13.755191 kubelet[2605]: I0712 00:12:13.753091 2605 apiserver.go:52] "Watching apiserver" Jul 12 00:12:13.785904 kubelet[2605]: I0712 00:12:13.785845 2605 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:12:13.848327 kubelet[2605]: I0712 00:12:13.846941 2605 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 
00:12:13.861656 kubelet[2605]: E0712 00:12:13.861612 2605 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-4-n-51c90d58be\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" Jul 12 00:12:13.894722 kubelet[2605]: I0712 00:12:13.894560 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-4-n-51c90d58be" podStartSLOduration=1.8945414299999999 podStartE2EDuration="1.89454143s" podCreationTimestamp="2025-07-12 00:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:13.877195624 +0000 UTC m=+1.221753806" watchObservedRunningTime="2025-07-12 00:12:13.89454143 +0000 UTC m=+1.239099612" Jul 12 00:12:13.909732 kubelet[2605]: I0712 00:12:13.909248 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-4-n-51c90d58be" podStartSLOduration=1.909101656 podStartE2EDuration="1.909101656s" podCreationTimestamp="2025-07-12 00:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:13.895176635 +0000 UTC m=+1.239734777" watchObservedRunningTime="2025-07-12 00:12:13.909101656 +0000 UTC m=+1.253659838" Jul 12 00:12:13.911069 kubelet[2605]: I0712 00:12:13.910377 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-4-n-51c90d58be" podStartSLOduration=1.910364345 podStartE2EDuration="1.910364345s" podCreationTimestamp="2025-07-12 00:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:13.910296304 +0000 UTC m=+1.254854526" watchObservedRunningTime="2025-07-12 00:12:13.910364345 +0000 UTC m=+1.254922527" Jul 12 
00:12:15.986524 sudo[1766]: pam_unix(sudo:session): session closed for user root Jul 12 00:12:16.148932 sshd[1749]: pam_unix(sshd:session): session closed for user core Jul 12 00:12:16.154956 systemd[1]: sshd@6-91.99.93.35:22-139.178.68.195:47958.service: Deactivated successfully. Jul 12 00:12:16.159010 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:12:16.159187 systemd[1]: session-7.scope: Consumed 8.233s CPU time, 153.1M memory peak, 0B memory swap peak. Jul 12 00:12:16.160384 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:12:16.162037 systemd-logind[1458]: Removed session 7. Jul 12 00:12:16.570504 kubelet[2605]: I0712 00:12:16.570433 2605 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:12:16.571616 containerd[1487]: time="2025-07-12T00:12:16.571552926Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:12:16.572228 kubelet[2605]: I0712 00:12:16.571804 2605 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:12:17.432566 systemd[1]: Created slice kubepods-besteffort-pod3b606ded_1e2d_4e07_8f48_cd5c69b0895d.slice - libcontainer container kubepods-besteffort-pod3b606ded_1e2d_4e07_8f48_cd5c69b0895d.slice. Jul 12 00:12:17.470723 systemd[1]: Created slice kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice - libcontainer container kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice. 
Jul 12 00:12:17.519868 kubelet[2605]: I0712 00:12:17.519767 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b606ded-1e2d-4e07-8f48-cd5c69b0895d-xtables-lock\") pod \"kube-proxy-db8f9\" (UID: \"3b606ded-1e2d-4e07-8f48-cd5c69b0895d\") " pod="kube-system/kube-proxy-db8f9" Jul 12 00:12:17.520285 kubelet[2605]: I0712 00:12:17.520176 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b606ded-1e2d-4e07-8f48-cd5c69b0895d-lib-modules\") pod \"kube-proxy-db8f9\" (UID: \"3b606ded-1e2d-4e07-8f48-cd5c69b0895d\") " pod="kube-system/kube-proxy-db8f9" Jul 12 00:12:17.520285 kubelet[2605]: I0712 00:12:17.520223 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-bpf-maps\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520276 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-xtables-lock\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520320 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-net\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520350 2605 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-kernel\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520421 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hostproc\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520454 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-etc-cni-netd\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521075 kubelet[2605]: I0712 00:12:17.520496 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hubble-tls\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520639 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-run\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520666 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cni-path\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520712 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-config-path\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520742 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b606ded-1e2d-4e07-8f48-cd5c69b0895d-kube-proxy\") pod \"kube-proxy-db8f9\" (UID: \"3b606ded-1e2d-4e07-8f48-cd5c69b0895d\") " pod="kube-system/kube-proxy-db8f9" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520766 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-lib-modules\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521296 kubelet[2605]: I0712 00:12:17.520792 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-cgroup\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521515 kubelet[2605]: I0712 00:12:17.520815 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42bab5d5-0d24-4a23-a7cb-8f4b695235df-clustermesh-secrets\") pod \"cilium-dfqh6\" (UID: 
\"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.521515 kubelet[2605]: I0712 00:12:17.520860 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84bkv\" (UniqueName: \"kubernetes.io/projected/3b606ded-1e2d-4e07-8f48-cd5c69b0895d-kube-api-access-84bkv\") pod \"kube-proxy-db8f9\" (UID: \"3b606ded-1e2d-4e07-8f48-cd5c69b0895d\") " pod="kube-system/kube-proxy-db8f9" Jul 12 00:12:17.521515 kubelet[2605]: I0712 00:12:17.520894 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5m4k\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-kube-api-access-p5m4k\") pod \"cilium-dfqh6\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " pod="kube-system/cilium-dfqh6" Jul 12 00:12:17.614726 systemd[1]: Created slice kubepods-besteffort-poddffd0092_204d_4e86_b66d_c7726b4ebf1c.slice - libcontainer container kubepods-besteffort-poddffd0092_204d_4e86_b66d_c7726b4ebf1c.slice. 
Jul 12 00:12:17.621421 kubelet[2605]: I0712 00:12:17.621299 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dffd0092-204d-4e86-b66d-c7726b4ebf1c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-2lxst\" (UID: \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\") " pod="kube-system/cilium-operator-6c4d7847fc-2lxst"
Jul 12 00:12:17.623244 kubelet[2605]: I0712 00:12:17.622411 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws84b\" (UniqueName: \"kubernetes.io/projected/dffd0092-204d-4e86-b66d-c7726b4ebf1c-kube-api-access-ws84b\") pod \"cilium-operator-6c4d7847fc-2lxst\" (UID: \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\") " pod="kube-system/cilium-operator-6c4d7847fc-2lxst"
Jul 12 00:12:17.746884 containerd[1487]: time="2025-07-12T00:12:17.746729347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db8f9,Uid:3b606ded-1e2d-4e07-8f48-cd5c69b0895d,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:17.777956 containerd[1487]: time="2025-07-12T00:12:17.777211347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:17.777956 containerd[1487]: time="2025-07-12T00:12:17.777314747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:17.777956 containerd[1487]: time="2025-07-12T00:12:17.777331947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.777956 containerd[1487]: time="2025-07-12T00:12:17.777429268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.778387 containerd[1487]: time="2025-07-12T00:12:17.778321714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqh6,Uid:42bab5d5-0d24-4a23-a7cb-8f4b695235df,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:17.802718 systemd[1]: Started cri-containerd-56245f006702ea3943bcb3d9e0055f7ea0f5f88eff03d0069a8bdeee8a55f5f0.scope - libcontainer container 56245f006702ea3943bcb3d9e0055f7ea0f5f88eff03d0069a8bdeee8a55f5f0.
Jul 12 00:12:17.813888 containerd[1487]: time="2025-07-12T00:12:17.813560665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:17.814977 containerd[1487]: time="2025-07-12T00:12:17.814593192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:17.814977 containerd[1487]: time="2025-07-12T00:12:17.814641512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.814977 containerd[1487]: time="2025-07-12T00:12:17.814810353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.845173 containerd[1487]: time="2025-07-12T00:12:17.844215266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-db8f9,Uid:3b606ded-1e2d-4e07-8f48-cd5c69b0895d,Namespace:kube-system,Attempt:0,} returns sandbox id \"56245f006702ea3943bcb3d9e0055f7ea0f5f88eff03d0069a8bdeee8a55f5f0\""
Jul 12 00:12:17.850958 systemd[1]: Started cri-containerd-63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf.scope - libcontainer container 63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf.
Jul 12 00:12:17.852254 containerd[1487]: time="2025-07-12T00:12:17.851692435Z" level=info msg="CreateContainer within sandbox \"56245f006702ea3943bcb3d9e0055f7ea0f5f88eff03d0069a8bdeee8a55f5f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 12 00:12:17.880173 containerd[1487]: time="2025-07-12T00:12:17.879738179Z" level=info msg="CreateContainer within sandbox \"56245f006702ea3943bcb3d9e0055f7ea0f5f88eff03d0069a8bdeee8a55f5f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"12ed64dc417c525d3e354394c4e71a1641dbf89ca60be23b1e5c6fa49204ab58\""
Jul 12 00:12:17.881586 containerd[1487]: time="2025-07-12T00:12:17.881550751Z" level=info msg="StartContainer for \"12ed64dc417c525d3e354394c4e71a1641dbf89ca60be23b1e5c6fa49204ab58\""
Jul 12 00:12:17.883678 containerd[1487]: time="2025-07-12T00:12:17.883630925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dfqh6,Uid:42bab5d5-0d24-4a23-a7cb-8f4b695235df,Namespace:kube-system,Attempt:0,} returns sandbox id \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\""
Jul 12 00:12:17.887308 containerd[1487]: time="2025-07-12T00:12:17.886573824Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 12 00:12:17.923375 containerd[1487]: time="2025-07-12T00:12:17.922093657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lxst,Uid:dffd0092-204d-4e86-b66d-c7726b4ebf1c,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:17.923151 systemd[1]: Started cri-containerd-12ed64dc417c525d3e354394c4e71a1641dbf89ca60be23b1e5c6fa49204ab58.scope - libcontainer container 12ed64dc417c525d3e354394c4e71a1641dbf89ca60be23b1e5c6fa49204ab58.
Jul 12 00:12:17.958920 containerd[1487]: time="2025-07-12T00:12:17.957400529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:17.958920 containerd[1487]: time="2025-07-12T00:12:17.958659057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:17.958920 containerd[1487]: time="2025-07-12T00:12:17.958678857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.958920 containerd[1487]: time="2025-07-12T00:12:17.958770658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:17.998723 systemd[1]: Started cri-containerd-c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f.scope - libcontainer container c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f.
Jul 12 00:12:18.013816 containerd[1487]: time="2025-07-12T00:12:18.013549455Z" level=info msg="StartContainer for \"12ed64dc417c525d3e354394c4e71a1641dbf89ca60be23b1e5c6fa49204ab58\" returns successfully"
Jul 12 00:12:18.044916 containerd[1487]: time="2025-07-12T00:12:18.044408053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-2lxst,Uid:dffd0092-204d-4e86-b66d-c7726b4ebf1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\""
Jul 12 00:12:18.879731 kubelet[2605]: I0712 00:12:18.879591 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-db8f9" podStartSLOduration=1.8795466790000002 podStartE2EDuration="1.879546679s" podCreationTimestamp="2025-07-12 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:18.877970989 +0000 UTC m=+6.222529171" watchObservedRunningTime="2025-07-12 00:12:18.879546679 +0000 UTC m=+6.224104901"
Jul 12 00:12:21.879108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248388930.mount: Deactivated successfully.
Jul 12 00:12:23.383175 containerd[1487]: time="2025-07-12T00:12:23.382191608Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:23.384570 containerd[1487]: time="2025-07-12T00:12:23.384516621Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Jul 12 00:12:23.386605 containerd[1487]: time="2025-07-12T00:12:23.386556833Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:23.389064 containerd[1487]: time="2025-07-12T00:12:23.389022967Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.502407663s"
Jul 12 00:12:23.389181 containerd[1487]: time="2025-07-12T00:12:23.389164088Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 12 00:12:23.391922 containerd[1487]: time="2025-07-12T00:12:23.391694502Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 12 00:12:23.393402 containerd[1487]: time="2025-07-12T00:12:23.393352752Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:12:23.409385 containerd[1487]: time="2025-07-12T00:12:23.409302802Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\""
Jul 12 00:12:23.411061 containerd[1487]: time="2025-07-12T00:12:23.411032972Z" level=info msg="StartContainer for \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\""
Jul 12 00:12:23.444437 systemd[1]: run-containerd-runc-k8s.io-f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02-runc.OIXqK0.mount: Deactivated successfully.
Jul 12 00:12:23.457939 systemd[1]: Started cri-containerd-f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02.scope - libcontainer container f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02.
Jul 12 00:12:23.498615 containerd[1487]: time="2025-07-12T00:12:23.498373268Z" level=info msg="StartContainer for \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\" returns successfully"
Jul 12 00:12:23.515951 systemd[1]: cri-containerd-f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02.scope: Deactivated successfully.
Jul 12 00:12:23.728416 containerd[1487]: time="2025-07-12T00:12:23.728106331Z" level=info msg="shim disconnected" id=f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02 namespace=k8s.io
Jul 12 00:12:23.728416 containerd[1487]: time="2025-07-12T00:12:23.728200292Z" level=warning msg="cleaning up after shim disconnected" id=f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02 namespace=k8s.io
Jul 12 00:12:23.728416 containerd[1487]: time="2025-07-12T00:12:23.728218652Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:23.890761 containerd[1487]: time="2025-07-12T00:12:23.890620494Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:12:23.911322 containerd[1487]: time="2025-07-12T00:12:23.911253691Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\""
Jul 12 00:12:23.912346 containerd[1487]: time="2025-07-12T00:12:23.912317257Z" level=info msg="StartContainer for \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\""
Jul 12 00:12:23.957859 systemd[1]: Started cri-containerd-dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6.scope - libcontainer container dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6.
Jul 12 00:12:24.003281 containerd[1487]: time="2025-07-12T00:12:24.003069852Z" level=info msg="StartContainer for \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\" returns successfully"
Jul 12 00:12:24.017040 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:12:24.017613 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:12:24.017690 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:12:24.026767 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:12:24.027001 systemd[1]: cri-containerd-dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6.scope: Deactivated successfully.
Jul 12 00:12:24.050741 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:12:24.062548 containerd[1487]: time="2025-07-12T00:12:24.062128899Z" level=info msg="shim disconnected" id=dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6 namespace=k8s.io
Jul 12 00:12:24.062548 containerd[1487]: time="2025-07-12T00:12:24.062319900Z" level=warning msg="cleaning up after shim disconnected" id=dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6 namespace=k8s.io
Jul 12 00:12:24.062548 containerd[1487]: time="2025-07-12T00:12:24.062333660Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:24.406065 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02-rootfs.mount: Deactivated successfully.
Jul 12 00:12:24.880821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1050016180.mount: Deactivated successfully.
Jul 12 00:12:24.897872 containerd[1487]: time="2025-07-12T00:12:24.897799172Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:12:24.925233 containerd[1487]: time="2025-07-12T00:12:24.925168244Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\""
Jul 12 00:12:24.926025 containerd[1487]: time="2025-07-12T00:12:24.925968488Z" level=info msg="StartContainer for \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\""
Jul 12 00:12:24.964953 systemd[1]: Started cri-containerd-8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02.scope - libcontainer container 8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02.
Jul 12 00:12:25.017130 containerd[1487]: time="2025-07-12T00:12:25.017081471Z" level=info msg="StartContainer for \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\" returns successfully"
Jul 12 00:12:25.021366 systemd[1]: cri-containerd-8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02.scope: Deactivated successfully.
Jul 12 00:12:25.057507 containerd[1487]: time="2025-07-12T00:12:25.057416290Z" level=info msg="shim disconnected" id=8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02 namespace=k8s.io
Jul 12 00:12:25.057507 containerd[1487]: time="2025-07-12T00:12:25.057470410Z" level=warning msg="cleaning up after shim disconnected" id=8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02 namespace=k8s.io
Jul 12 00:12:25.057507 containerd[1487]: time="2025-07-12T00:12:25.057495210Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:25.441264 containerd[1487]: time="2025-07-12T00:12:25.441164968Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:25.443159 containerd[1487]: time="2025-07-12T00:12:25.443091658Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 12 00:12:25.444041 containerd[1487]: time="2025-07-12T00:12:25.443571781Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:12:25.445779 containerd[1487]: time="2025-07-12T00:12:25.445635072Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.05389521s"
Jul 12 00:12:25.445779 containerd[1487]: time="2025-07-12T00:12:25.445680152Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 12 00:12:25.450774 containerd[1487]: time="2025-07-12T00:12:25.450634619Z" level=info msg="CreateContainer within sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 12 00:12:25.472565 containerd[1487]: time="2025-07-12T00:12:25.472169776Z" level=info msg="CreateContainer within sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\""
Jul 12 00:12:25.473773 containerd[1487]: time="2025-07-12T00:12:25.472912060Z" level=info msg="StartContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\""
Jul 12 00:12:25.514793 systemd[1]: Started cri-containerd-345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9.scope - libcontainer container 345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9.
Jul 12 00:12:25.550770 containerd[1487]: time="2025-07-12T00:12:25.550615081Z" level=info msg="StartContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" returns successfully"
Jul 12 00:12:25.910829 containerd[1487]: time="2025-07-12T00:12:25.910775711Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:12:25.928158 containerd[1487]: time="2025-07-12T00:12:25.928090245Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\""
Jul 12 00:12:25.929025 containerd[1487]: time="2025-07-12T00:12:25.928990290Z" level=info msg="StartContainer for \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\""
Jul 12 00:12:25.975697 systemd[1]: Started cri-containerd-f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c.scope - libcontainer container f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c.
Jul 12 00:12:25.981971 kubelet[2605]: I0712 00:12:25.981858 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-2lxst" podStartSLOduration=1.581144881 podStartE2EDuration="8.981819976s" podCreationTimestamp="2025-07-12 00:12:17 +0000 UTC" firstStartedPulling="2025-07-12 00:12:18.046361905 +0000 UTC m=+5.390920087" lastFinishedPulling="2025-07-12 00:12:25.447037 +0000 UTC m=+12.791595182" observedRunningTime="2025-07-12 00:12:25.924899228 +0000 UTC m=+13.269457410" watchObservedRunningTime="2025-07-12 00:12:25.981819976 +0000 UTC m=+13.326378158"
Jul 12 00:12:26.016962 systemd[1]: cri-containerd-f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c.scope: Deactivated successfully.
Jul 12 00:12:26.019731 containerd[1487]: time="2025-07-12T00:12:26.018690534Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice/cri-containerd-f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c.scope/memory.events\": no such file or directory"
Jul 12 00:12:26.025558 containerd[1487]: time="2025-07-12T00:12:26.024095482Z" level=info msg="StartContainer for \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\" returns successfully"
Jul 12 00:12:26.092660 containerd[1487]: time="2025-07-12T00:12:26.092582965Z" level=info msg="shim disconnected" id=f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c namespace=k8s.io
Jul 12 00:12:26.093579 containerd[1487]: time="2025-07-12T00:12:26.093532570Z" level=warning msg="cleaning up after shim disconnected" id=f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c namespace=k8s.io
Jul 12 00:12:26.093579 containerd[1487]: time="2025-07-12T00:12:26.093567850Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:12:26.404282 systemd[1]: run-containerd-runc-k8s.io-345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9-runc.VMHLmn.mount: Deactivated successfully.
Jul 12 00:12:26.914145 containerd[1487]: time="2025-07-12T00:12:26.914020592Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:12:26.935880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1491530826.mount: Deactivated successfully.
Jul 12 00:12:26.945968 containerd[1487]: time="2025-07-12T00:12:26.945789720Z" level=info msg="CreateContainer within sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\""
Jul 12 00:12:26.948436 containerd[1487]: time="2025-07-12T00:12:26.948386134Z" level=info msg="StartContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\""
Jul 12 00:12:26.987028 systemd[1]: Started cri-containerd-e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45.scope - libcontainer container e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45.
Jul 12 00:12:27.031730 containerd[1487]: time="2025-07-12T00:12:27.030258444Z" level=info msg="StartContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" returns successfully"
Jul 12 00:12:27.157995 kubelet[2605]: I0712 00:12:27.157946 2605 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 12 00:12:27.207159 systemd[1]: Created slice kubepods-burstable-pod3fab884c_5d1e_4378_8115_bd60e8ecb8cb.slice - libcontainer container kubepods-burstable-pod3fab884c_5d1e_4378_8115_bd60e8ecb8cb.slice.
Jul 12 00:12:27.209955 kubelet[2605]: I0712 00:12:27.209907 2605 status_manager.go:890] "Failed to get status for pod" podUID="3fab884c-5d1e-4378-8115-bd60e8ecb8cb" pod="kube-system/coredns-668d6bf9bc-dvzzm" err="pods \"coredns-668d6bf9bc-dvzzm\" is forbidden: User \"system:node:ci-4081-3-4-n-51c90d58be\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-4-n-51c90d58be' and this object"
Jul 12 00:12:27.221171 systemd[1]: Created slice kubepods-burstable-podc5e2e052_b6de_4218_9c58_cc723434a748.slice - libcontainer container kubepods-burstable-podc5e2e052_b6de_4218_9c58_cc723434a748.slice.
Jul 12 00:12:27.298059 kubelet[2605]: I0712 00:12:27.297877 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhm6j\" (UniqueName: \"kubernetes.io/projected/3fab884c-5d1e-4378-8115-bd60e8ecb8cb-kube-api-access-nhm6j\") pod \"coredns-668d6bf9bc-dvzzm\" (UID: \"3fab884c-5d1e-4378-8115-bd60e8ecb8cb\") " pod="kube-system/coredns-668d6bf9bc-dvzzm"
Jul 12 00:12:27.298059 kubelet[2605]: I0712 00:12:27.297929 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jp5j\" (UniqueName: \"kubernetes.io/projected/c5e2e052-b6de-4218-9c58-cc723434a748-kube-api-access-6jp5j\") pod \"coredns-668d6bf9bc-kxvz6\" (UID: \"c5e2e052-b6de-4218-9c58-cc723434a748\") " pod="kube-system/coredns-668d6bf9bc-kxvz6"
Jul 12 00:12:27.298059 kubelet[2605]: I0712 00:12:27.297950 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5e2e052-b6de-4218-9c58-cc723434a748-config-volume\") pod \"coredns-668d6bf9bc-kxvz6\" (UID: \"c5e2e052-b6de-4218-9c58-cc723434a748\") " pod="kube-system/coredns-668d6bf9bc-kxvz6"
Jul 12 00:12:27.298059 kubelet[2605]: I0712 00:12:27.297972 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fab884c-5d1e-4378-8115-bd60e8ecb8cb-config-volume\") pod \"coredns-668d6bf9bc-dvzzm\" (UID: \"3fab884c-5d1e-4378-8115-bd60e8ecb8cb\") " pod="kube-system/coredns-668d6bf9bc-dvzzm"
Jul 12 00:12:27.515314 containerd[1487]: time="2025-07-12T00:12:27.515180392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvzzm,Uid:3fab884c-5d1e-4378-8115-bd60e8ecb8cb,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:27.526210 containerd[1487]: time="2025-07-12T00:12:27.525944687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kxvz6,Uid:c5e2e052-b6de-4218-9c58-cc723434a748,Namespace:kube-system,Attempt:0,}"
Jul 12 00:12:29.309028 systemd-networkd[1360]: cilium_host: Link UP
Jul 12 00:12:29.310675 systemd-networkd[1360]: cilium_net: Link UP
Jul 12 00:12:29.311639 systemd-networkd[1360]: cilium_net: Gained carrier
Jul 12 00:12:29.311802 systemd-networkd[1360]: cilium_host: Gained carrier
Jul 12 00:12:29.445433 systemd-networkd[1360]: cilium_vxlan: Link UP
Jul 12 00:12:29.445622 systemd-networkd[1360]: cilium_vxlan: Gained carrier
Jul 12 00:12:29.743530 kernel: NET: Registered PF_ALG protocol family
Jul 12 00:12:30.079101 systemd-networkd[1360]: cilium_net: Gained IPv6LL
Jul 12 00:12:30.079700 systemd-networkd[1360]: cilium_host: Gained IPv6LL
Jul 12 00:12:30.526310 systemd-networkd[1360]: lxc_health: Link UP
Jul 12 00:12:30.552631 systemd-networkd[1360]: lxc_health: Gained carrier
Jul 12 00:12:30.847252 systemd-networkd[1360]: cilium_vxlan: Gained IPv6LL
Jul 12 00:12:31.091605 kernel: eth0: renamed from tmpa4f28
Jul 12 00:12:31.095727 systemd-networkd[1360]: lxc1d3f3833e372: Link UP
Jul 12 00:12:31.097062 systemd-networkd[1360]: lxc1d3f3833e372: Gained carrier
Jul 12 00:12:31.113435 systemd-networkd[1360]: lxc035efc1dfd56: Link UP
Jul 12 00:12:31.124781 kernel: eth0: renamed from tmp24ed1
Jul 12 00:12:31.129090 systemd-networkd[1360]: lxc035efc1dfd56: Gained carrier
Jul 12 00:12:31.804812 kubelet[2605]: I0712 00:12:31.804118 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dfqh6" podStartSLOduration=9.300194986 podStartE2EDuration="14.804089658s" podCreationTimestamp="2025-07-12 00:12:17 +0000 UTC" firstStartedPulling="2025-07-12 00:12:17.886129901 +0000 UTC m=+5.230688083" lastFinishedPulling="2025-07-12 00:12:23.390024573 +0000 UTC m=+10.734582755" observedRunningTime="2025-07-12 00:12:27.947108666 +0000 UTC m=+15.291666848" watchObservedRunningTime="2025-07-12 00:12:31.804089658 +0000 UTC m=+19.148647840"
Jul 12 00:12:32.319094 systemd-networkd[1360]: lxc035efc1dfd56: Gained IPv6LL
Jul 12 00:12:32.510682 systemd-networkd[1360]: lxc_health: Gained IPv6LL
Jul 12 00:12:32.702805 systemd-networkd[1360]: lxc1d3f3833e372: Gained IPv6LL
Jul 12 00:12:35.113013 containerd[1487]: time="2025-07-12T00:12:35.112888861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:35.113574 containerd[1487]: time="2025-07-12T00:12:35.113076182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:35.114035 containerd[1487]: time="2025-07-12T00:12:35.113639105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.118775 containerd[1487]: time="2025-07-12T00:12:35.118679566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.128505 containerd[1487]: time="2025-07-12T00:12:35.128373528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:12:35.128670 containerd[1487]: time="2025-07-12T00:12:35.128448609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:12:35.128670 containerd[1487]: time="2025-07-12T00:12:35.128468809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.128670 containerd[1487]: time="2025-07-12T00:12:35.128586569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:12:35.149699 systemd[1]: Started cri-containerd-24ed199e4f1e0154cb02d2fb4dcbc25e21f0ff5c5f313a6ef829d61da7db5d7d.scope - libcontainer container 24ed199e4f1e0154cb02d2fb4dcbc25e21f0ff5c5f313a6ef829d61da7db5d7d.
Jul 12 00:12:35.171296 systemd[1]: run-containerd-runc-k8s.io-a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea-runc.aDpwHO.mount: Deactivated successfully.
Jul 12 00:12:35.181012 systemd[1]: Started cri-containerd-a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea.scope - libcontainer container a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea.
Jul 12 00:12:35.222133 containerd[1487]: time="2025-07-12T00:12:35.222078295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kxvz6,Uid:c5e2e052-b6de-4218-9c58-cc723434a748,Namespace:kube-system,Attempt:0,} returns sandbox id \"24ed199e4f1e0154cb02d2fb4dcbc25e21f0ff5c5f313a6ef829d61da7db5d7d\""
Jul 12 00:12:35.227202 containerd[1487]: time="2025-07-12T00:12:35.227154797Z" level=info msg="CreateContainer within sandbox \"24ed199e4f1e0154cb02d2fb4dcbc25e21f0ff5c5f313a6ef829d61da7db5d7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:12:35.259864 containerd[1487]: time="2025-07-12T00:12:35.256276483Z" level=info msg="CreateContainer within sandbox \"24ed199e4f1e0154cb02d2fb4dcbc25e21f0ff5c5f313a6ef829d61da7db5d7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07d0658bf944356a1128c6f798c53c9b6d31fe89ebb50dcfc5ae51236ee86400\""
Jul 12 00:12:35.259864 containerd[1487]: time="2025-07-12T00:12:35.257609769Z" level=info msg="StartContainer for \"07d0658bf944356a1128c6f798c53c9b6d31fe89ebb50dcfc5ae51236ee86400\""
Jul 12 00:12:35.269333 containerd[1487]: time="2025-07-12T00:12:35.269276180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dvzzm,Uid:3fab884c-5d1e-4378-8115-bd60e8ecb8cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea\""
Jul 12 00:12:35.276062 containerd[1487]: time="2025-07-12T00:12:35.275861728Z" level=info msg="CreateContainer within sandbox \"a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:12:35.301576 containerd[1487]: time="2025-07-12T00:12:35.300257474Z" level=info msg="CreateContainer within sandbox \"a4f28de591ed0a154c60c2c2f5b1c5ffee5073d5aa40ba289af8fefa9be176ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77d5fb25e276dfa0750b75c54de12bb3380a46ebdb732c12aef0d5421fb80061\""
Jul 12 00:12:35.301576 containerd[1487]: time="2025-07-12T00:12:35.301558520Z" level=info msg="StartContainer for \"77d5fb25e276dfa0750b75c54de12bb3380a46ebdb732c12aef0d5421fb80061\""
Jul 12 00:12:35.308255 systemd[1]: Started cri-containerd-07d0658bf944356a1128c6f798c53c9b6d31fe89ebb50dcfc5ae51236ee86400.scope - libcontainer container 07d0658bf944356a1128c6f798c53c9b6d31fe89ebb50dcfc5ae51236ee86400.
Jul 12 00:12:35.347316 systemd[1]: Started cri-containerd-77d5fb25e276dfa0750b75c54de12bb3380a46ebdb732c12aef0d5421fb80061.scope - libcontainer container 77d5fb25e276dfa0750b75c54de12bb3380a46ebdb732c12aef0d5421fb80061.
Jul 12 00:12:35.373399 containerd[1487]: time="2025-07-12T00:12:35.372822349Z" level=info msg="StartContainer for \"07d0658bf944356a1128c6f798c53c9b6d31fe89ebb50dcfc5ae51236ee86400\" returns successfully" Jul 12 00:12:35.404307 containerd[1487]: time="2025-07-12T00:12:35.403889164Z" level=info msg="StartContainer for \"77d5fb25e276dfa0750b75c54de12bb3380a46ebdb732c12aef0d5421fb80061\" returns successfully" Jul 12 00:12:35.967741 kubelet[2605]: I0712 00:12:35.967130 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kxvz6" podStartSLOduration=18.967108528 podStartE2EDuration="18.967108528s" podCreationTimestamp="2025-07-12 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:35.961935345 +0000 UTC m=+23.306493567" watchObservedRunningTime="2025-07-12 00:12:35.967108528 +0000 UTC m=+23.311666670" Jul 12 00:12:36.014976 kubelet[2605]: I0712 00:12:36.014912 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dvzzm" podStartSLOduration=19.014892814 podStartE2EDuration="19.014892814s" podCreationTimestamp="2025-07-12 00:12:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:12:36.012917925 +0000 UTC m=+23.357476147" watchObservedRunningTime="2025-07-12 00:12:36.014892814 +0000 UTC m=+23.359450996" Jul 12 00:12:37.567698 kubelet[2605]: I0712 00:12:37.567419 2605 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 12 00:14:42.520112 systemd[1]: Started sshd@7-91.99.93.35:22-139.178.68.195:34964.service - OpenSSH per-connection server daemon (139.178.68.195:34964). 
Jul 12 00:14:43.512306 sshd[4005]: Accepted publickey for core from 139.178.68.195 port 34964 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:43.515073 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:43.520347 systemd-logind[1458]: New session 8 of user core. Jul 12 00:14:43.524788 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:14:44.309194 sshd[4005]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:44.316159 systemd[1]: sshd@7-91.99.93.35:22-139.178.68.195:34964.service: Deactivated successfully. Jul 12 00:14:44.316391 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:14:44.318734 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:14:44.320229 systemd-logind[1458]: Removed session 8. Jul 12 00:14:49.481164 systemd[1]: Started sshd@8-91.99.93.35:22-139.178.68.195:47960.service - OpenSSH per-connection server daemon (139.178.68.195:47960). Jul 12 00:14:50.456885 sshd[4021]: Accepted publickey for core from 139.178.68.195 port 47960 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:50.459052 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:50.466612 systemd-logind[1458]: New session 9 of user core. Jul 12 00:14:50.472853 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 00:14:51.214990 sshd[4021]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:51.219456 systemd[1]: sshd@8-91.99.93.35:22-139.178.68.195:47960.service: Deactivated successfully. Jul 12 00:14:51.223788 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 00:14:51.226721 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. Jul 12 00:14:51.227934 systemd-logind[1458]: Removed session 9. 
Jul 12 00:14:56.395957 systemd[1]: Started sshd@9-91.99.93.35:22-139.178.68.195:47964.service - OpenSSH per-connection server daemon (139.178.68.195:47964). Jul 12 00:14:57.393326 sshd[4034]: Accepted publickey for core from 139.178.68.195 port 47964 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:57.395691 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:57.402264 systemd-logind[1458]: New session 10 of user core. Jul 12 00:14:57.405770 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 00:14:58.161375 sshd[4034]: pam_unix(sshd:session): session closed for user core Jul 12 00:14:58.167310 systemd[1]: sshd@9-91.99.93.35:22-139.178.68.195:47964.service: Deactivated successfully. Jul 12 00:14:58.170325 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 00:14:58.171922 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. Jul 12 00:14:58.173366 systemd-logind[1458]: Removed session 10. Jul 12 00:14:58.336885 systemd[1]: Started sshd@10-91.99.93.35:22-139.178.68.195:49706.service - OpenSSH per-connection server daemon (139.178.68.195:49706). Jul 12 00:14:59.332033 sshd[4048]: Accepted publickey for core from 139.178.68.195 port 49706 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:14:59.333949 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:14:59.340110 systemd-logind[1458]: New session 11 of user core. Jul 12 00:14:59.348846 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 00:15:00.137703 sshd[4048]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:00.143530 systemd[1]: sshd@10-91.99.93.35:22-139.178.68.195:49706.service: Deactivated successfully. Jul 12 00:15:00.146118 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 00:15:00.147235 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. 
Jul 12 00:15:00.149160 systemd-logind[1458]: Removed session 11. Jul 12 00:15:00.315961 systemd[1]: Started sshd@11-91.99.93.35:22-139.178.68.195:49720.service - OpenSSH per-connection server daemon (139.178.68.195:49720). Jul 12 00:15:01.291644 sshd[4059]: Accepted publickey for core from 139.178.68.195 port 49720 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:01.296096 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:01.304466 systemd-logind[1458]: New session 12 of user core. Jul 12 00:15:01.309856 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 00:15:02.060213 sshd[4059]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:02.063995 systemd[1]: sshd@11-91.99.93.35:22-139.178.68.195:49720.service: Deactivated successfully. Jul 12 00:15:02.067710 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 00:15:02.070207 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. Jul 12 00:15:02.071412 systemd-logind[1458]: Removed session 12. Jul 12 00:15:07.237961 systemd[1]: Started sshd@12-91.99.93.35:22-139.178.68.195:49730.service - OpenSSH per-connection server daemon (139.178.68.195:49730). Jul 12 00:15:08.232202 sshd[4073]: Accepted publickey for core from 139.178.68.195 port 49730 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:08.235105 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:08.243119 systemd-logind[1458]: New session 13 of user core. Jul 12 00:15:08.246733 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 00:15:08.998878 sshd[4073]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:09.009054 systemd[1]: sshd@12-91.99.93.35:22-139.178.68.195:49730.service: Deactivated successfully. Jul 12 00:15:09.013174 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 12 00:15:09.014809 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. Jul 12 00:15:09.016715 systemd-logind[1458]: Removed session 13. Jul 12 00:15:09.180968 systemd[1]: Started sshd@13-91.99.93.35:22-139.178.68.195:45434.service - OpenSSH per-connection server daemon (139.178.68.195:45434). Jul 12 00:15:10.188537 sshd[4086]: Accepted publickey for core from 139.178.68.195 port 45434 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:10.190775 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:10.197901 systemd-logind[1458]: New session 14 of user core. Jul 12 00:15:10.201823 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 00:15:11.002852 sshd[4086]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:11.008059 systemd[1]: sshd@13-91.99.93.35:22-139.178.68.195:45434.service: Deactivated successfully. Jul 12 00:15:11.010693 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 00:15:11.011732 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. Jul 12 00:15:11.013277 systemd-logind[1458]: Removed session 14. Jul 12 00:15:11.177018 systemd[1]: Started sshd@14-91.99.93.35:22-139.178.68.195:45438.service - OpenSSH per-connection server daemon (139.178.68.195:45438). Jul 12 00:15:12.155752 sshd[4096]: Accepted publickey for core from 139.178.68.195 port 45438 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:12.158704 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:12.165919 systemd-logind[1458]: New session 15 of user core. Jul 12 00:15:12.174344 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 00:15:13.899471 sshd[4096]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:13.905648 systemd[1]: sshd@14-91.99.93.35:22-139.178.68.195:45438.service: Deactivated successfully. 
Jul 12 00:15:13.908123 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 00:15:13.911099 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. Jul 12 00:15:13.913016 systemd-logind[1458]: Removed session 15. Jul 12 00:15:14.085052 systemd[1]: Started sshd@15-91.99.93.35:22-139.178.68.195:45450.service - OpenSSH per-connection server daemon (139.178.68.195:45450). Jul 12 00:15:15.079828 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 45450 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:15.081362 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:15.088580 systemd-logind[1458]: New session 16 of user core. Jul 12 00:15:15.095823 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 00:15:15.982773 sshd[4116]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:15.988915 systemd[1]: sshd@15-91.99.93.35:22-139.178.68.195:45450.service: Deactivated successfully. Jul 12 00:15:15.991538 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 00:15:15.994371 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. Jul 12 00:15:15.995848 systemd-logind[1458]: Removed session 16. Jul 12 00:15:16.183965 systemd[1]: Started sshd@16-91.99.93.35:22-139.178.68.195:45456.service - OpenSSH per-connection server daemon (139.178.68.195:45456). Jul 12 00:15:17.242313 sshd[4127]: Accepted publickey for core from 139.178.68.195 port 45456 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:17.244613 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:17.251587 systemd-logind[1458]: New session 17 of user core. Jul 12 00:15:17.255722 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jul 12 00:15:18.051841 sshd[4127]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:18.059582 systemd[1]: sshd@16-91.99.93.35:22-139.178.68.195:45456.service: Deactivated successfully. Jul 12 00:15:18.062252 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 00:15:18.063188 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. Jul 12 00:15:18.065089 systemd-logind[1458]: Removed session 17. Jul 12 00:15:23.216992 systemd[1]: Started sshd@17-91.99.93.35:22-139.178.68.195:46486.service - OpenSSH per-connection server daemon (139.178.68.195:46486). Jul 12 00:15:24.199742 sshd[4144]: Accepted publickey for core from 139.178.68.195 port 46486 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:24.202666 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:24.210932 systemd-logind[1458]: New session 18 of user core. Jul 12 00:15:24.218799 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 00:15:24.954466 sshd[4144]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:24.959147 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. Jul 12 00:15:24.959382 systemd[1]: sshd@17-91.99.93.35:22-139.178.68.195:46486.service: Deactivated successfully. Jul 12 00:15:24.962271 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 00:15:24.965768 systemd-logind[1458]: Removed session 18. Jul 12 00:15:30.134885 systemd[1]: Started sshd@18-91.99.93.35:22-139.178.68.195:35096.service - OpenSSH per-connection server daemon (139.178.68.195:35096). Jul 12 00:15:31.142370 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 35096 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:31.144422 sshd[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:31.150025 systemd-logind[1458]: New session 19 of user core. 
Jul 12 00:15:31.157838 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 00:15:31.910354 sshd[4157]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:31.915977 systemd[1]: sshd@18-91.99.93.35:22-139.178.68.195:35096.service: Deactivated successfully. Jul 12 00:15:31.919064 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 00:15:31.920268 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. Jul 12 00:15:31.921466 systemd-logind[1458]: Removed session 19. Jul 12 00:15:32.092938 systemd[1]: Started sshd@19-91.99.93.35:22-139.178.68.195:35098.service - OpenSSH per-connection server daemon (139.178.68.195:35098). Jul 12 00:15:33.101458 sshd[4169]: Accepted publickey for core from 139.178.68.195 port 35098 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:33.104418 sshd[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:33.112401 systemd-logind[1458]: New session 20 of user core. Jul 12 00:15:33.121987 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 12 00:15:35.720543 containerd[1487]: time="2025-07-12T00:15:35.720381171Z" level=info msg="StopContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" with timeout 30 (s)" Jul 12 00:15:35.722145 containerd[1487]: time="2025-07-12T00:15:35.722067655Z" level=info msg="Stop container \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" with signal terminated" Jul 12 00:15:35.733873 containerd[1487]: time="2025-07-12T00:15:35.733819042Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:15:35.739944 systemd[1]: cri-containerd-345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9.scope: Deactivated successfully. Jul 12 00:15:35.743539 containerd[1487]: time="2025-07-12T00:15:35.743442905Z" level=info msg="StopContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" with timeout 2 (s)" Jul 12 00:15:35.744064 containerd[1487]: time="2025-07-12T00:15:35.744033826Z" level=info msg="Stop container \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" with signal terminated" Jul 12 00:15:35.754021 systemd-networkd[1360]: lxc_health: Link DOWN Jul 12 00:15:35.754028 systemd-networkd[1360]: lxc_health: Lost carrier Jul 12 00:15:35.781057 systemd[1]: cri-containerd-e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45.scope: Deactivated successfully. Jul 12 00:15:35.782731 systemd[1]: cri-containerd-e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45.scope: Consumed 7.661s CPU time. Jul 12 00:15:35.792261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:35.803423 containerd[1487]: time="2025-07-12T00:15:35.803223684Z" level=info msg="shim disconnected" id=345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9 namespace=k8s.io Jul 12 00:15:35.803423 containerd[1487]: time="2025-07-12T00:15:35.803293204Z" level=warning msg="cleaning up after shim disconnected" id=345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9 namespace=k8s.io Jul 12 00:15:35.803423 containerd[1487]: time="2025-07-12T00:15:35.803312084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:35.822782 containerd[1487]: time="2025-07-12T00:15:35.822000168Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:15:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 12 00:15:35.822257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45-rootfs.mount: Deactivated successfully. 
Jul 12 00:15:35.824947 containerd[1487]: time="2025-07-12T00:15:35.824621494Z" level=info msg="shim disconnected" id=e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45 namespace=k8s.io Jul 12 00:15:35.824947 containerd[1487]: time="2025-07-12T00:15:35.824698335Z" level=warning msg="cleaning up after shim disconnected" id=e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45 namespace=k8s.io Jul 12 00:15:35.824947 containerd[1487]: time="2025-07-12T00:15:35.824707615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:35.831054 containerd[1487]: time="2025-07-12T00:15:35.830836229Z" level=info msg="StopContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" returns successfully" Jul 12 00:15:35.832353 containerd[1487]: time="2025-07-12T00:15:35.832298952Z" level=info msg="StopPodSandbox for \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\"" Jul 12 00:15:35.832353 containerd[1487]: time="2025-07-12T00:15:35.832350312Z" level=info msg="Container to stop \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.835056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f-shm.mount: Deactivated successfully. Jul 12 00:15:35.851125 systemd[1]: cri-containerd-c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f.scope: Deactivated successfully. 
Jul 12 00:15:35.857628 containerd[1487]: time="2025-07-12T00:15:35.857559611Z" level=info msg="StopContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" returns successfully" Jul 12 00:15:35.858860 containerd[1487]: time="2025-07-12T00:15:35.858806774Z" level=info msg="StopPodSandbox for \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\"" Jul 12 00:15:35.859124 containerd[1487]: time="2025-07-12T00:15:35.859084255Z" level=info msg="Container to stop \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.859228 containerd[1487]: time="2025-07-12T00:15:35.859206175Z" level=info msg="Container to stop \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.859328 containerd[1487]: time="2025-07-12T00:15:35.859307575Z" level=info msg="Container to stop \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.859426 containerd[1487]: time="2025-07-12T00:15:35.859404256Z" level=info msg="Container to stop \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.859555 containerd[1487]: time="2025-07-12T00:15:35.859525736Z" level=info msg="Container to stop \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:15:35.862776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf-shm.mount: Deactivated successfully. Jul 12 00:15:35.871175 systemd[1]: cri-containerd-63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf.scope: Deactivated successfully. 
Jul 12 00:15:35.901240 containerd[1487]: time="2025-07-12T00:15:35.900436392Z" level=info msg="shim disconnected" id=c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f namespace=k8s.io Jul 12 00:15:35.901240 containerd[1487]: time="2025-07-12T00:15:35.900938873Z" level=warning msg="cleaning up after shim disconnected" id=c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f namespace=k8s.io Jul 12 00:15:35.901240 containerd[1487]: time="2025-07-12T00:15:35.901179033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:35.915131 containerd[1487]: time="2025-07-12T00:15:35.914972186Z" level=info msg="shim disconnected" id=63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf namespace=k8s.io Jul 12 00:15:35.915131 containerd[1487]: time="2025-07-12T00:15:35.915086786Z" level=warning msg="cleaning up after shim disconnected" id=63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf namespace=k8s.io Jul 12 00:15:35.915131 containerd[1487]: time="2025-07-12T00:15:35.915097186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:15:35.925026 containerd[1487]: time="2025-07-12T00:15:35.924539728Z" level=info msg="TearDown network for sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" successfully" Jul 12 00:15:35.925026 containerd[1487]: time="2025-07-12T00:15:35.924582808Z" level=info msg="StopPodSandbox for \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" returns successfully" Jul 12 00:15:35.941517 containerd[1487]: time="2025-07-12T00:15:35.941420807Z" level=info msg="TearDown network for sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" successfully" Jul 12 00:15:35.941774 containerd[1487]: time="2025-07-12T00:15:35.941709968Z" level=info msg="StopPodSandbox for \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" returns successfully" Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120279 2605 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-cgroup\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120350 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-bpf-maps\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120383 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42bab5d5-0d24-4a23-a7cb-8f4b695235df-clustermesh-secrets\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120405 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cni-path\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120433 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-config-path\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121188 kubelet[2605]: I0712 00:15:36.120434 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: 
"42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.121974 kubelet[2605]: I0712 00:15:36.120462 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws84b\" (UniqueName: \"kubernetes.io/projected/dffd0092-204d-4e86-b66d-c7726b4ebf1c-kube-api-access-ws84b\") pod \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\" (UID: \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\") " Jul 12 00:15:36.121974 kubelet[2605]: I0712 00:15:36.120543 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-net\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121974 kubelet[2605]: I0712 00:15:36.120554 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cni-path" (OuterVolumeSpecName: "cni-path") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.121974 kubelet[2605]: I0712 00:15:36.120565 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-kernel\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.121974 kubelet[2605]: I0712 00:15:36.120586 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hostproc\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120592 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120607 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-run\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120628 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-lib-modules\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120654 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hubble-tls\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120682 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dffd0092-204d-4e86-b66d-c7726b4ebf1c-cilium-config-path\") pod \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\" (UID: \"dffd0092-204d-4e86-b66d-c7726b4ebf1c\") " Jul 12 00:15:36.122210 kubelet[2605]: I0712 00:15:36.120712 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-xtables-lock\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122529 kubelet[2605]: I0712 00:15:36.120733 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-etc-cni-netd\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122529 kubelet[2605]: I0712 00:15:36.120757 2605 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5m4k\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-kube-api-access-p5m4k\") pod \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\" (UID: \"42bab5d5-0d24-4a23-a7cb-8f4b695235df\") " Jul 12 00:15:36.122529 kubelet[2605]: I0712 00:15:36.120807 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-cgroup\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.122529 kubelet[2605]: I0712 00:15:36.120822 2605 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-bpf-maps\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.122529 kubelet[2605]: I0712 00:15:36.120835 2605 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cni-path\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.123542 kubelet[2605]: I0712 00:15:36.122994 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.124669 kubelet[2605]: I0712 00:15:36.124624 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.126186 kubelet[2605]: I0712 00:15:36.126048 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.126186 kubelet[2605]: I0712 00:15:36.126095 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.127108 kubelet[2605]: I0712 00:15:36.126855 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.127108 kubelet[2605]: I0712 00:15:36.126898 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.127108 kubelet[2605]: I0712 00:15:36.126916 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hostproc" (OuterVolumeSpecName: "hostproc") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:15:36.129154 kubelet[2605]: I0712 00:15:36.128965 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42bab5d5-0d24-4a23-a7cb-8f4b695235df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:15:36.129570 kubelet[2605]: I0712 00:15:36.129415 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dffd0092-204d-4e86-b66d-c7726b4ebf1c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dffd0092-204d-4e86-b66d-c7726b4ebf1c" (UID: "dffd0092-204d-4e86-b66d-c7726b4ebf1c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:15:36.130103 kubelet[2605]: I0712 00:15:36.130068 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-kube-api-access-p5m4k" (OuterVolumeSpecName: "kube-api-access-p5m4k") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "kube-api-access-p5m4k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:15:36.132227 kubelet[2605]: I0712 00:15:36.132186 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:15:36.133972 kubelet[2605]: I0712 00:15:36.133877 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "42bab5d5-0d24-4a23-a7cb-8f4b695235df" (UID: "42bab5d5-0d24-4a23-a7cb-8f4b695235df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:15:36.133972 kubelet[2605]: I0712 00:15:36.133952 2605 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dffd0092-204d-4e86-b66d-c7726b4ebf1c-kube-api-access-ws84b" (OuterVolumeSpecName: "kube-api-access-ws84b") pod "dffd0092-204d-4e86-b66d-c7726b4ebf1c" (UID: "dffd0092-204d-4e86-b66d-c7726b4ebf1c"). InnerVolumeSpecName "kube-api-access-ws84b". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:15:36.221895 kubelet[2605]: I0712 00:15:36.221776 2605 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hubble-tls\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.221895 kubelet[2605]: I0712 00:15:36.221835 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dffd0092-204d-4e86-b66d-c7726b4ebf1c-cilium-config-path\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222167 kubelet[2605]: I0712 00:15:36.221995 2605 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-xtables-lock\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222167 kubelet[2605]: I0712 00:15:36.222057 2605 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-etc-cni-netd\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222167 kubelet[2605]: I0712 00:15:36.222110 2605 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5m4k\" (UniqueName: \"kubernetes.io/projected/42bab5d5-0d24-4a23-a7cb-8f4b695235df-kube-api-access-p5m4k\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222167 kubelet[2605]: I0712 00:15:36.222145 2605 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/42bab5d5-0d24-4a23-a7cb-8f4b695235df-clustermesh-secrets\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222167 kubelet[2605]: I0712 00:15:36.222167 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-config-path\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222185 2605 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws84b\" (UniqueName: \"kubernetes.io/projected/dffd0092-204d-4e86-b66d-c7726b4ebf1c-kube-api-access-ws84b\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222207 2605 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-net\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222224 2605 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-host-proc-sys-kernel\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222243 2605 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-hostproc\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222277 2605 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-cilium-run\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.222454 kubelet[2605]: I0712 00:15:36.222299 2605 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42bab5d5-0d24-4a23-a7cb-8f4b695235df-lib-modules\") on node \"ci-4081-3-4-n-51c90d58be\" DevicePath \"\"" Jul 12 00:15:36.425395 kubelet[2605]: I0712 00:15:36.424133 2605 scope.go:117] "RemoveContainer" 
containerID="e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45" Jul 12 00:15:36.432045 containerd[1487]: time="2025-07-12T00:15:36.431517830Z" level=info msg="RemoveContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\"" Jul 12 00:15:36.435350 systemd[1]: Removed slice kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice - libcontainer container kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice. Jul 12 00:15:36.436225 systemd[1]: kubepods-burstable-pod42bab5d5_0d24_4a23_a7cb_8f4b695235df.slice: Consumed 7.759s CPU time. Jul 12 00:15:36.444704 containerd[1487]: time="2025-07-12T00:15:36.443726418Z" level=info msg="RemoveContainer for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" returns successfully" Jul 12 00:15:36.444892 kubelet[2605]: I0712 00:15:36.444147 2605 scope.go:117] "RemoveContainer" containerID="f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c" Jul 12 00:15:36.446609 containerd[1487]: time="2025-07-12T00:15:36.446184064Z" level=info msg="RemoveContainer for \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\"" Jul 12 00:15:36.449746 systemd[1]: Removed slice kubepods-besteffort-poddffd0092_204d_4e86_b66d_c7726b4ebf1c.slice - libcontainer container kubepods-besteffort-poddffd0092_204d_4e86_b66d_c7726b4ebf1c.slice. 
Jul 12 00:15:36.452937 containerd[1487]: time="2025-07-12T00:15:36.452747799Z" level=info msg="RemoveContainer for \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\" returns successfully" Jul 12 00:15:36.453532 kubelet[2605]: I0712 00:15:36.453393 2605 scope.go:117] "RemoveContainer" containerID="8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02" Jul 12 00:15:36.455514 containerd[1487]: time="2025-07-12T00:15:36.455347365Z" level=info msg="RemoveContainer for \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\"" Jul 12 00:15:36.463242 containerd[1487]: time="2025-07-12T00:15:36.462264221Z" level=info msg="RemoveContainer for \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\" returns successfully" Jul 12 00:15:36.464017 kubelet[2605]: I0712 00:15:36.463830 2605 scope.go:117] "RemoveContainer" containerID="dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6" Jul 12 00:15:36.468528 containerd[1487]: time="2025-07-12T00:15:36.468277355Z" level=info msg="RemoveContainer for \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\"" Jul 12 00:15:36.477801 containerd[1487]: time="2025-07-12T00:15:36.477741657Z" level=info msg="RemoveContainer for \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\" returns successfully" Jul 12 00:15:36.478306 kubelet[2605]: I0712 00:15:36.478221 2605 scope.go:117] "RemoveContainer" containerID="f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02" Jul 12 00:15:36.481548 containerd[1487]: time="2025-07-12T00:15:36.481354266Z" level=info msg="RemoveContainer for \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\"" Jul 12 00:15:36.490457 containerd[1487]: time="2025-07-12T00:15:36.490130566Z" level=info msg="RemoveContainer for \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\" returns successfully" Jul 12 00:15:36.490954 kubelet[2605]: I0712 00:15:36.490898 2605 scope.go:117] 
"RemoveContainer" containerID="e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45" Jul 12 00:15:36.491813 containerd[1487]: time="2025-07-12T00:15:36.491707970Z" level=error msg="ContainerStatus for \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\": not found" Jul 12 00:15:36.492410 kubelet[2605]: E0712 00:15:36.492009 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\": not found" containerID="e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45" Jul 12 00:15:36.492656 kubelet[2605]: I0712 00:15:36.492051 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45"} err="failed to get container status \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\": rpc error: code = NotFound desc = an error occurred when try to find container \"e051e1ae304b9a9caa7c876161e107568f303cdcba6466599ca6c52e179c1c45\": not found" Jul 12 00:15:36.492898 kubelet[2605]: I0712 00:15:36.492768 2605 scope.go:117] "RemoveContainer" containerID="f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c" Jul 12 00:15:36.493701 containerd[1487]: time="2025-07-12T00:15:36.493631974Z" level=error msg="ContainerStatus for \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\": not found" Jul 12 00:15:36.494051 kubelet[2605]: E0712 00:15:36.493987 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\": not found" containerID="f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c" Jul 12 00:15:36.494051 kubelet[2605]: I0712 00:15:36.494028 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c"} err="failed to get container status \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f013e7de3408acdebe8a3ee85827c3780f3d07a3a57bc17b09b5b5eb6200732c\": not found" Jul 12 00:15:36.494051 kubelet[2605]: I0712 00:15:36.494056 2605 scope.go:117] "RemoveContainer" containerID="8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02" Jul 12 00:15:36.494417 containerd[1487]: time="2025-07-12T00:15:36.494292016Z" level=error msg="ContainerStatus for \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\": not found" Jul 12 00:15:36.494626 kubelet[2605]: E0712 00:15:36.494606 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\": not found" containerID="8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02" Jul 12 00:15:36.494895 kubelet[2605]: I0712 00:15:36.494743 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02"} err="failed to get container status \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"8c760190d87427f4680122897cda1b65e0db0b73d8c4069fb304739d39bb3d02\": not found" Jul 12 00:15:36.494895 kubelet[2605]: I0712 00:15:36.494787 2605 scope.go:117] "RemoveContainer" containerID="dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6" Jul 12 00:15:36.495196 containerd[1487]: time="2025-07-12T00:15:36.495111298Z" level=error msg="ContainerStatus for \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\": not found" Jul 12 00:15:36.495306 kubelet[2605]: E0712 00:15:36.495253 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\": not found" containerID="dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6" Jul 12 00:15:36.495357 kubelet[2605]: I0712 00:15:36.495301 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6"} err="failed to get container status \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\": rpc error: code = NotFound desc = an error occurred when try to find container \"dfd57452cb5b2104292fe2a291410452b76764878be7f45becf0422484a29ae6\": not found" Jul 12 00:15:36.495357 kubelet[2605]: I0712 00:15:36.495322 2605 scope.go:117] "RemoveContainer" containerID="f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02" Jul 12 00:15:36.495763 containerd[1487]: time="2025-07-12T00:15:36.495548819Z" level=error msg="ContainerStatus for \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\": not found" Jul 12 00:15:36.496101 kubelet[2605]: E0712 00:15:36.496072 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\": not found" containerID="f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02" Jul 12 00:15:36.496399 kubelet[2605]: I0712 00:15:36.496209 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02"} err="failed to get container status \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\": rpc error: code = NotFound desc = an error occurred when try to find container \"f5b1be100549b65b6cdf79ebbe591afd2f7ed3ea5d38f97f73d849f6ba807c02\": not found" Jul 12 00:15:36.496399 kubelet[2605]: I0712 00:15:36.496238 2605 scope.go:117] "RemoveContainer" containerID="345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9" Jul 12 00:15:36.498247 containerd[1487]: time="2025-07-12T00:15:36.497983304Z" level=info msg="RemoveContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\"" Jul 12 00:15:36.502442 containerd[1487]: time="2025-07-12T00:15:36.502386995Z" level=info msg="RemoveContainer for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" returns successfully" Jul 12 00:15:36.503033 kubelet[2605]: I0712 00:15:36.502862 2605 scope.go:117] "RemoveContainer" containerID="345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9" Jul 12 00:15:36.503212 containerd[1487]: time="2025-07-12T00:15:36.503142796Z" level=error msg="ContainerStatus for \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\": not found" Jul 12 00:15:36.503325 kubelet[2605]: E0712 00:15:36.503287 2605 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\": not found" containerID="345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9" Jul 12 00:15:36.503416 kubelet[2605]: I0712 00:15:36.503392 2605 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9"} err="failed to get container status \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"345561a01c850d69240028b19b4962254bca7d0c586a2a1969fe11c9130f23b9\": not found" Jul 12 00:15:36.712706 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f-rootfs.mount: Deactivated successfully. Jul 12 00:15:36.712975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf-rootfs.mount: Deactivated successfully. Jul 12 00:15:36.713100 systemd[1]: var-lib-kubelet-pods-dffd0092\x2d204d\x2d4e86\x2db66d\x2dc7726b4ebf1c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dws84b.mount: Deactivated successfully. Jul 12 00:15:36.713230 systemd[1]: var-lib-kubelet-pods-42bab5d5\x2d0d24\x2d4a23\x2da7cb\x2d8f4b695235df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp5m4k.mount: Deactivated successfully. Jul 12 00:15:36.713388 systemd[1]: var-lib-kubelet-pods-42bab5d5\x2d0d24\x2d4a23\x2da7cb\x2d8f4b695235df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:15:36.713583 systemd[1]: var-lib-kubelet-pods-42bab5d5\x2d0d24\x2d4a23\x2da7cb\x2d8f4b695235df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:15:36.801353 kubelet[2605]: I0712 00:15:36.801299 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42bab5d5-0d24-4a23-a7cb-8f4b695235df" path="/var/lib/kubelet/pods/42bab5d5-0d24-4a23-a7cb-8f4b695235df/volumes" Jul 12 00:15:36.802201 kubelet[2605]: I0712 00:15:36.802159 2605 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dffd0092-204d-4e86-b66d-c7726b4ebf1c" path="/var/lib/kubelet/pods/dffd0092-204d-4e86-b66d-c7726b4ebf1c/volumes" Jul 12 00:15:37.796250 sshd[4169]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:37.800897 systemd[1]: sshd@19-91.99.93.35:22-139.178.68.195:35098.service: Deactivated successfully. Jul 12 00:15:37.804354 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 00:15:37.804665 systemd[1]: session-20.scope: Consumed 1.435s CPU time. Jul 12 00:15:37.806632 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit. Jul 12 00:15:37.808248 systemd-logind[1458]: Removed session 20. Jul 12 00:15:37.934266 kubelet[2605]: E0712 00:15:37.934180 2605 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:15:37.967273 systemd[1]: Started sshd@20-91.99.93.35:22-139.178.68.195:35102.service - OpenSSH per-connection server daemon (139.178.68.195:35102). Jul 12 00:15:38.948519 sshd[4326]: Accepted publickey for core from 139.178.68.195 port 35102 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M Jul 12 00:15:38.950625 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:15:38.956999 systemd-logind[1458]: New session 21 of user core. 
Jul 12 00:15:38.972153 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 12 00:15:40.870159 kubelet[2605]: I0712 00:15:40.870111 2605 memory_manager.go:355] "RemoveStaleState removing state" podUID="42bab5d5-0d24-4a23-a7cb-8f4b695235df" containerName="cilium-agent" Jul 12 00:15:40.870159 kubelet[2605]: I0712 00:15:40.870140 2605 memory_manager.go:355] "RemoveStaleState removing state" podUID="dffd0092-204d-4e86-b66d-c7726b4ebf1c" containerName="cilium-operator" Jul 12 00:15:40.891467 systemd[1]: Created slice kubepods-burstable-podacdab748_58e0_4d11_847c_d094a4757185.slice - libcontainer container kubepods-burstable-podacdab748_58e0_4d11_847c_d094a4757185.slice. Jul 12 00:15:41.027794 sshd[4326]: pam_unix(sshd:session): session closed for user core Jul 12 00:15:41.032586 systemd[1]: sshd@20-91.99.93.35:22-139.178.68.195:35102.service: Deactivated successfully. Jul 12 00:15:41.032663 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit. Jul 12 00:15:41.036098 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 00:15:41.036539 systemd[1]: session-21.scope: Consumed 1.263s CPU time. Jul 12 00:15:41.037676 systemd-logind[1458]: Removed session 21. 
Jul 12 00:15:41.054215 kubelet[2605]: I0712 00:15:41.054023 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4px\" (UniqueName: \"kubernetes.io/projected/acdab748-58e0-4d11-847c-d094a4757185-kube-api-access-xj4px\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054215 kubelet[2605]: I0712 00:15:41.054101 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acdab748-58e0-4d11-847c-d094a4757185-clustermesh-secrets\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054218 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-bpf-maps\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054303 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acdab748-58e0-4d11-847c-d094a4757185-cilium-config-path\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054331 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acdab748-58e0-4d11-847c-d094a4757185-hubble-tls\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054358 2605 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-cilium-run\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054382 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-cni-path\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.054773 kubelet[2605]: I0712 00:15:41.054406 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-etc-cni-netd\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055093 kubelet[2605]: I0712 00:15:41.054429 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-lib-modules\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055093 kubelet[2605]: I0712 00:15:41.054466 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/acdab748-58e0-4d11-847c-d094a4757185-cilium-ipsec-secrets\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055093 kubelet[2605]: I0712 00:15:41.054628 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-hostproc\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055093 kubelet[2605]: I0712 00:15:41.054683 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-host-proc-sys-kernel\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055093 kubelet[2605]: I0712 00:15:41.054709 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-host-proc-sys-net\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055631 kubelet[2605]: I0712 00:15:41.055417 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-cilium-cgroup\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.055631 kubelet[2605]: I0712 00:15:41.055552 2605 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acdab748-58e0-4d11-847c-d094a4757185-xtables-lock\") pod \"cilium-zg5rj\" (UID: \"acdab748-58e0-4d11-847c-d094a4757185\") " pod="kube-system/cilium-zg5rj" Jul 12 00:15:41.198003 containerd[1487]: time="2025-07-12T00:15:41.197878999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg5rj,Uid:acdab748-58e0-4d11-847c-d094a4757185,Namespace:kube-system,Attempt:0,}" Jul 12 00:15:41.202699 systemd[1]: Started 
sshd@21-91.99.93.35:22-139.178.68.195:42810.service - OpenSSH per-connection server daemon (139.178.68.195:42810). Jul 12 00:15:41.231379 containerd[1487]: time="2025-07-12T00:15:41.231093275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:15:41.232660 containerd[1487]: time="2025-07-12T00:15:41.232295958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:15:41.232660 containerd[1487]: time="2025-07-12T00:15:41.232328038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:15:41.232660 containerd[1487]: time="2025-07-12T00:15:41.232537079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:15:41.260793 systemd[1]: Started cri-containerd-bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332.scope - libcontainer container bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332. 
Jul 12 00:15:41.292655 containerd[1487]: time="2025-07-12T00:15:41.292577616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zg5rj,Uid:acdab748-58e0-4d11-847c-d094a4757185,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\""
Jul 12 00:15:41.302237 containerd[1487]: time="2025-07-12T00:15:41.302097838Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:15:41.317661 containerd[1487]: time="2025-07-12T00:15:41.317430193Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd\""
Jul 12 00:15:41.320408 containerd[1487]: time="2025-07-12T00:15:41.320090359Z" level=info msg="StartContainer for \"06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd\""
Jul 12 00:15:41.351889 systemd[1]: Started cri-containerd-06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd.scope - libcontainer container 06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd.
Jul 12 00:15:41.389138 containerd[1487]: time="2025-07-12T00:15:41.389082917Z" level=info msg="StartContainer for \"06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd\" returns successfully"
Jul 12 00:15:41.400710 systemd[1]: cri-containerd-06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd.scope: Deactivated successfully.
Jul 12 00:15:41.445356 containerd[1487]: time="2025-07-12T00:15:41.445226245Z" level=info msg="shim disconnected" id=06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd namespace=k8s.io
Jul 12 00:15:41.445356 containerd[1487]: time="2025-07-12T00:15:41.445299285Z" level=warning msg="cleaning up after shim disconnected" id=06b9c7eb08113ced40f36cb3e8fd81953effbf368b7c7f5182c67bc8a810dafd namespace=k8s.io
Jul 12 00:15:41.445356 containerd[1487]: time="2025-07-12T00:15:41.445313285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:15:42.185837 sshd[4343]: Accepted publickey for core from 139.178.68.195 port 42810 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:42.188417 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:42.198514 systemd-logind[1458]: New session 22 of user core.
Jul 12 00:15:42.199860 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:15:42.467933 containerd[1487]: time="2025-07-12T00:15:42.467743420Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:15:42.493006 containerd[1487]: time="2025-07-12T00:15:42.492924438Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290\""
Jul 12 00:15:42.494125 containerd[1487]: time="2025-07-12T00:15:42.494074760Z" level=info msg="StartContainer for \"63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290\""
Jul 12 00:15:42.533798 systemd[1]: Started cri-containerd-63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290.scope - libcontainer container 63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290.
Jul 12 00:15:42.570677 containerd[1487]: time="2025-07-12T00:15:42.570594335Z" level=info msg="StartContainer for \"63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290\" returns successfully"
Jul 12 00:15:42.577757 systemd[1]: cri-containerd-63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290.scope: Deactivated successfully.
Jul 12 00:15:42.606724 containerd[1487]: time="2025-07-12T00:15:42.606600657Z" level=info msg="shim disconnected" id=63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290 namespace=k8s.io
Jul 12 00:15:42.607873 containerd[1487]: time="2025-07-12T00:15:42.607390939Z" level=warning msg="cleaning up after shim disconnected" id=63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290 namespace=k8s.io
Jul 12 00:15:42.607873 containerd[1487]: time="2025-07-12T00:15:42.607423739Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:15:42.623376 containerd[1487]: time="2025-07-12T00:15:42.622611333Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:15:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 12 00:15:42.864103 sshd[4343]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:42.869927 systemd[1]: sshd@21-91.99.93.35:22-139.178.68.195:42810.service: Deactivated successfully.
Jul 12 00:15:42.872412 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:15:42.873360 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:15:42.875330 systemd-logind[1458]: Removed session 22.
Jul 12 00:15:42.936102 kubelet[2605]: E0712 00:15:42.936031 2605 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:15:43.075012 systemd[1]: Started sshd@22-91.99.93.35:22-139.178.68.195:42818.service - OpenSSH per-connection server daemon (139.178.68.195:42818).
Jul 12 00:15:43.161363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63c8d6ced09acd54d4086cbbe83860d4cd0cd18b31689e5b9dbad5421c513290-rootfs.mount: Deactivated successfully.
Jul 12 00:15:43.472253 containerd[1487]: time="2025-07-12T00:15:43.472034386Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:15:43.500362 containerd[1487]: time="2025-07-12T00:15:43.498714087Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8\""
Jul 12 00:15:43.500362 containerd[1487]: time="2025-07-12T00:15:43.499702049Z" level=info msg="StartContainer for \"c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8\""
Jul 12 00:15:43.540792 systemd[1]: Started cri-containerd-c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8.scope - libcontainer container c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8.
Jul 12 00:15:43.585139 containerd[1487]: time="2025-07-12T00:15:43.585084803Z" level=info msg="StartContainer for \"c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8\" returns successfully"
Jul 12 00:15:43.585953 systemd[1]: cri-containerd-c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8.scope: Deactivated successfully.
Jul 12 00:15:43.613559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8-rootfs.mount: Deactivated successfully.
Jul 12 00:15:43.621602 containerd[1487]: time="2025-07-12T00:15:43.621310645Z" level=info msg="shim disconnected" id=c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8 namespace=k8s.io
Jul 12 00:15:43.621602 containerd[1487]: time="2025-07-12T00:15:43.621389045Z" level=warning msg="cleaning up after shim disconnected" id=c0e09b7cde7f0a9834543a52c91dcadc787895180d679d6341eb3460f45c7cd8 namespace=k8s.io
Jul 12 00:15:43.621602 containerd[1487]: time="2025-07-12T00:15:43.621399645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:15:44.134417 sshd[4515]: Accepted publickey for core from 139.178.68.195 port 42818 ssh2: RSA SHA256:F+XLD192VdJplBwsaXiDmdHN61qgjd2kCMtCNVPlP/M
Jul 12 00:15:44.137043 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:15:44.144214 systemd-logind[1458]: New session 23 of user core.
Jul 12 00:15:44.153821 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:15:44.485572 containerd[1487]: time="2025-07-12T00:15:44.482952399Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:15:44.507189 containerd[1487]: time="2025-07-12T00:15:44.507128053Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79\""
Jul 12 00:15:44.510539 containerd[1487]: time="2025-07-12T00:15:44.509573059Z" level=info msg="StartContainer for \"d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79\""
Jul 12 00:15:44.557816 systemd[1]: Started cri-containerd-d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79.scope - libcontainer container d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79.
Jul 12 00:15:44.588008 systemd[1]: cri-containerd-d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79.scope: Deactivated successfully.
Jul 12 00:15:44.590981 containerd[1487]: time="2025-07-12T00:15:44.590288642Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacdab748_58e0_4d11_847c_d094a4757185.slice/cri-containerd-d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79.scope/memory.events\": no such file or directory"
Jul 12 00:15:44.594836 containerd[1487]: time="2025-07-12T00:15:44.594785892Z" level=info msg="StartContainer for \"d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79\" returns successfully"
Jul 12 00:15:44.628563 containerd[1487]: time="2025-07-12T00:15:44.628438288Z" level=info msg="shim disconnected" id=d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79 namespace=k8s.io
Jul 12 00:15:44.628563 containerd[1487]: time="2025-07-12T00:15:44.628548848Z" level=warning msg="cleaning up after shim disconnected" id=d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79 namespace=k8s.io
Jul 12 00:15:44.628563 containerd[1487]: time="2025-07-12T00:15:44.628560328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:15:45.487927 containerd[1487]: time="2025-07-12T00:15:45.487863070Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:15:45.496767 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d85ca3948557ff22eacf98200da7ce0d75e04f9a342086f3513edfa966025b79-rootfs.mount: Deactivated successfully.
Jul 12 00:15:45.519449 containerd[1487]: time="2025-07-12T00:15:45.519295581Z" level=info msg="CreateContainer within sandbox \"bd95df7d68e22182a51de3ca0146aa5a07a1823b273cba7d13785eb6265f1332\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b\""
Jul 12 00:15:45.520205 containerd[1487]: time="2025-07-12T00:15:45.520106623Z" level=info msg="StartContainer for \"6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b\""
Jul 12 00:15:45.553904 systemd[1]: Started cri-containerd-6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b.scope - libcontainer container 6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b.
Jul 12 00:15:45.597688 containerd[1487]: time="2025-07-12T00:15:45.597633758Z" level=info msg="StartContainer for \"6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b\" returns successfully"
Jul 12 00:15:45.943530 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 12 00:15:46.366142 kubelet[2605]: I0712 00:15:46.366057 2605 setters.go:602] "Node became not ready" node="ci-4081-3-4-n-51c90d58be" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:15:46Z","lastTransitionTime":"2025-07-12T00:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:15:49.090742 systemd[1]: run-containerd-runc-k8s.io-6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b-runc.k74tNR.mount: Deactivated successfully.
Jul 12 00:15:49.131977 systemd-networkd[1360]: lxc_health: Link UP
Jul 12 00:15:49.148690 systemd-networkd[1360]: lxc_health: Gained carrier
Jul 12 00:15:49.225737 kubelet[2605]: I0712 00:15:49.225661 2605 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zg5rj" podStartSLOduration=9.225642412 podStartE2EDuration="9.225642412s" podCreationTimestamp="2025-07-12 00:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:15:46.515419145 +0000 UTC m=+213.859977367" watchObservedRunningTime="2025-07-12 00:15:49.225642412 +0000 UTC m=+216.570200594"
Jul 12 00:15:50.462732 systemd-networkd[1360]: lxc_health: Gained IPv6LL
Jul 12 00:15:51.353224 systemd[1]: run-containerd-runc-k8s.io-6181c6158c6ddb2462a74f2139c76b8fc5a9b368e7baffad739016fc98d2dc9b-runc.tvctbd.mount: Deactivated successfully.
Jul 12 00:15:55.970905 sshd[4515]: pam_unix(sshd:session): session closed for user core
Jul 12 00:15:55.977557 systemd[1]: sshd@22-91.99.93.35:22-139.178.68.195:42818.service: Deactivated successfully.
Jul 12 00:15:55.978270 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
Jul 12 00:15:55.981148 systemd[1]: session-23.scope: Deactivated successfully.
Jul 12 00:15:55.982449 systemd-logind[1458]: Removed session 23.
Jul 12 00:16:10.677721 systemd[1]: cri-containerd-2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49.scope: Deactivated successfully.
Jul 12 00:16:10.678678 systemd[1]: cri-containerd-2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49.scope: Consumed 4.973s CPU time, 19.0M memory peak, 0B memory swap peak.
Jul 12 00:16:10.701701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49-rootfs.mount: Deactivated successfully.
Jul 12 00:16:10.709792 containerd[1487]: time="2025-07-12T00:16:10.709598541Z" level=info msg="shim disconnected" id=2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49 namespace=k8s.io
Jul 12 00:16:10.709792 containerd[1487]: time="2025-07-12T00:16:10.709695101Z" level=warning msg="cleaning up after shim disconnected" id=2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49 namespace=k8s.io
Jul 12 00:16:10.709792 containerd[1487]: time="2025-07-12T00:16:10.709710021Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:16:10.757354 kubelet[2605]: E0712 00:16:10.756313 2605 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33422->10.0.0.2:2379: read: connection timed out"
Jul 12 00:16:10.763475 systemd[1]: cri-containerd-790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9.scope: Deactivated successfully.
Jul 12 00:16:10.763764 systemd[1]: cri-containerd-790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9.scope: Consumed 3.789s CPU time, 16.1M memory peak, 0B memory swap peak.
Jul 12 00:16:10.784586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9-rootfs.mount: Deactivated successfully.
Jul 12 00:16:10.796853 containerd[1487]: time="2025-07-12T00:16:10.796699244Z" level=info msg="shim disconnected" id=790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9 namespace=k8s.io
Jul 12 00:16:10.796853 containerd[1487]: time="2025-07-12T00:16:10.796819844Z" level=warning msg="cleaning up after shim disconnected" id=790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9 namespace=k8s.io
Jul 12 00:16:10.796853 containerd[1487]: time="2025-07-12T00:16:10.796846964Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:16:10.812735 containerd[1487]: time="2025-07-12T00:16:10.812672357Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:16:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 12 00:16:11.567005 kubelet[2605]: I0712 00:16:11.566965 2605 scope.go:117] "RemoveContainer" containerID="2824a094cbaa741ab5d3e19c61a6d59829af80bdecbc5b2d03f86ddc201c8e49"
Jul 12 00:16:11.569465 containerd[1487]: time="2025-07-12T00:16:11.569418064Z" level=info msg="CreateContainer within sandbox \"120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 12 00:16:11.571055 kubelet[2605]: I0712 00:16:11.571020 2605 scope.go:117] "RemoveContainer" containerID="790b608f4b2735fdfcc4b52eda8d6e563eb8a45dde033c5ee44734e0e183dac9"
Jul 12 00:16:11.573454 containerd[1487]: time="2025-07-12T00:16:11.573409712Z" level=info msg="CreateContainer within sandbox \"dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 12 00:16:11.587624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2775420821.mount: Deactivated successfully.
Jul 12 00:16:11.600580 containerd[1487]: time="2025-07-12T00:16:11.600424648Z" level=info msg="CreateContainer within sandbox \"120a9ee830a417e77442e19f07678b4e8dec424038e343d8fde393690764c149\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"56a9070a71ce0b9d1a8c6e4c09dea51d7e2e9cf42f896274b54c572cc0174377\""
Jul 12 00:16:11.601182 containerd[1487]: time="2025-07-12T00:16:11.600965890Z" level=info msg="StartContainer for \"56a9070a71ce0b9d1a8c6e4c09dea51d7e2e9cf42f896274b54c572cc0174377\""
Jul 12 00:16:11.607788 containerd[1487]: time="2025-07-12T00:16:11.607701424Z" level=info msg="CreateContainer within sandbox \"dba9ea633a137c4814d76e193e6c5b52f125a0b0f9f1a675c0f6d240acbb05ab\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6c6c1512841faef71c3d78dd63cb9aa2c3cfe3e644b083ef0eb734233681df9d\""
Jul 12 00:16:11.608445 containerd[1487]: time="2025-07-12T00:16:11.608352385Z" level=info msg="StartContainer for \"6c6c1512841faef71c3d78dd63cb9aa2c3cfe3e644b083ef0eb734233681df9d\""
Jul 12 00:16:11.637803 systemd[1]: Started cri-containerd-56a9070a71ce0b9d1a8c6e4c09dea51d7e2e9cf42f896274b54c572cc0174377.scope - libcontainer container 56a9070a71ce0b9d1a8c6e4c09dea51d7e2e9cf42f896274b54c572cc0174377.
Jul 12 00:16:11.653839 systemd[1]: Started cri-containerd-6c6c1512841faef71c3d78dd63cb9aa2c3cfe3e644b083ef0eb734233681df9d.scope - libcontainer container 6c6c1512841faef71c3d78dd63cb9aa2c3cfe3e644b083ef0eb734233681df9d.
Jul 12 00:16:11.692169 containerd[1487]: time="2025-07-12T00:16:11.692096161Z" level=info msg="StartContainer for \"56a9070a71ce0b9d1a8c6e4c09dea51d7e2e9cf42f896274b54c572cc0174377\" returns successfully"
Jul 12 00:16:11.708926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46992592.mount: Deactivated successfully.
Jul 12 00:16:11.720557 containerd[1487]: time="2025-07-12T00:16:11.720415020Z" level=info msg="StartContainer for \"6c6c1512841faef71c3d78dd63cb9aa2c3cfe3e644b083ef0eb734233681df9d\" returns successfully"
Jul 12 00:16:12.840453 containerd[1487]: time="2025-07-12T00:16:12.840266921Z" level=info msg="StopPodSandbox for \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\""
Jul 12 00:16:12.840453 containerd[1487]: time="2025-07-12T00:16:12.840357282Z" level=info msg="TearDown network for sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" successfully"
Jul 12 00:16:12.840453 containerd[1487]: time="2025-07-12T00:16:12.840379722Z" level=info msg="StopPodSandbox for \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" returns successfully"
Jul 12 00:16:12.842843 containerd[1487]: time="2025-07-12T00:16:12.841054483Z" level=info msg="RemovePodSandbox for \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\""
Jul 12 00:16:12.842843 containerd[1487]: time="2025-07-12T00:16:12.841086243Z" level=info msg="Forcibly stopping sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\""
Jul 12 00:16:12.842843 containerd[1487]: time="2025-07-12T00:16:12.841132043Z" level=info msg="TearDown network for sandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" successfully"
Jul 12 00:16:12.846613 containerd[1487]: time="2025-07-12T00:16:12.846556895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:16:12.846809 containerd[1487]: time="2025-07-12T00:16:12.846790775Z" level=info msg="RemovePodSandbox \"63564943b4d3b40769c1035e162b0223a475e7e929461acaf4ebbb9a89025eaf\" returns successfully"
Jul 12 00:16:12.849644 containerd[1487]: time="2025-07-12T00:16:12.849620541Z" level=info msg="StopPodSandbox for \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\""
Jul 12 00:16:12.849816 containerd[1487]: time="2025-07-12T00:16:12.849799781Z" level=info msg="TearDown network for sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" successfully"
Jul 12 00:16:12.849901 containerd[1487]: time="2025-07-12T00:16:12.849886822Z" level=info msg="StopPodSandbox for \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" returns successfully"
Jul 12 00:16:12.850290 containerd[1487]: time="2025-07-12T00:16:12.850259422Z" level=info msg="RemovePodSandbox for \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\""
Jul 12 00:16:12.850338 containerd[1487]: time="2025-07-12T00:16:12.850291782Z" level=info msg="Forcibly stopping sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\""
Jul 12 00:16:12.850363 containerd[1487]: time="2025-07-12T00:16:12.850345703Z" level=info msg="TearDown network for sandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" successfully"
Jul 12 00:16:12.855654 containerd[1487]: time="2025-07-12T00:16:12.855608834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 12 00:16:12.855767 containerd[1487]: time="2025-07-12T00:16:12.855694594Z" level=info msg="RemovePodSandbox \"c48d40b733ef244217da3f0e8b60b925de2a0c1dffdde2e8d12a4b2d0b97141f\" returns successfully"
Jul 12 00:16:14.337704 kubelet[2605]: E0712 00:16:14.337517 2605 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33222->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-4-n-51c90d58be.185158cc282187c2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-4-n-51c90d58be,UID:a6335b3a06a352424e342a858ed4c06e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-4-n-51c90d58be,},FirstTimestamp:2025-07-12 00:16:03.893692354 +0000 UTC m=+231.238250536,LastTimestamp:2025-07-12 00:16:03.893692354 +0000 UTC m=+231.238250536,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-4-n-51c90d58be,}"