May 17 00:11:37.897049 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 17 00:11:37.897082 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri May 16 22:39:35 -00 2025
May 17 00:11:37.897094 kernel: KASLR enabled
May 17 00:11:37.897100 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
May 17 00:11:37.897105 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
May 17 00:11:37.897111 kernel: random: crng init done
May 17 00:11:37.897119 kernel: ACPI: Early table checksum verification disabled
May 17 00:11:37.897125 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
May 17 00:11:37.897132 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
May 17 00:11:37.897143 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897150 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897157 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897164 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897171 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897179 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897189 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897197 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897205 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:11:37.897212 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
May 17 00:11:37.897220 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
May 17 00:11:37.897227 kernel: NUMA: Failed to initialise from firmware
May 17 00:11:37.897234 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:11:37.897242 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
May 17 00:11:37.897249 kernel: Zone ranges:
May 17 00:11:37.897257 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
May 17 00:11:37.897265 kernel: DMA32 empty
May 17 00:11:37.897272 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
May 17 00:11:37.897278 kernel: Movable zone start for each node
May 17 00:11:37.897285 kernel: Early memory node ranges
May 17 00:11:37.897291 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
May 17 00:11:37.897298 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
May 17 00:11:37.897304 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
May 17 00:11:37.897311 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
May 17 00:11:37.897317 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
May 17 00:11:37.897324 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
May 17 00:11:37.897331 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
May 17 00:11:37.897337 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
May 17 00:11:37.897345 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
May 17 00:11:37.897352 kernel: psci: probing for conduit method from ACPI.
May 17 00:11:37.897358 kernel: psci: PSCIv1.1 detected in firmware.
May 17 00:11:37.897368 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 00:11:37.897375 kernel: psci: Trusted OS migration not required
May 17 00:11:37.897382 kernel: psci: SMC Calling Convention v1.1
May 17 00:11:37.897390 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 17 00:11:37.897398 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 17 00:11:37.897405 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 17 00:11:37.897412 kernel: pcpu-alloc: [0] 0 [0] 1
May 17 00:11:37.897419 kernel: Detected PIPT I-cache on CPU0
May 17 00:11:37.897426 kernel: CPU features: detected: GIC system register CPU interface
May 17 00:11:37.897433 kernel: CPU features: detected: Hardware dirty bit management
May 17 00:11:37.897439 kernel: CPU features: detected: Spectre-v4
May 17 00:11:37.897446 kernel: CPU features: detected: Spectre-BHB
May 17 00:11:37.897453 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 00:11:37.897462 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 00:11:37.897469 kernel: CPU features: detected: ARM erratum 1418040
May 17 00:11:37.897476 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 00:11:37.897482 kernel: alternatives: applying boot alternatives
May 17 00:11:37.897491 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:11:37.897498 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:11:37.897505 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:11:37.897512 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:11:37.897519 kernel: Fallback order for Node 0: 0
May 17 00:11:37.897526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
May 17 00:11:37.897532 kernel: Policy zone: Normal
May 17 00:11:37.897541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:11:37.897548 kernel: software IO TLB: area num 2.
May 17 00:11:37.897555 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
May 17 00:11:37.897563 kernel: Memory: 3882872K/4096000K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 213128K reserved, 0K cma-reserved)
May 17 00:11:37.897570 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:11:37.897577 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:11:37.897584 kernel: rcu: RCU event tracing is enabled.
May 17 00:11:37.897591 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:11:37.897598 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:11:37.897606 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:11:37.897613 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:11:37.897621 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:11:37.897628 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 00:11:37.897635 kernel: GICv3: 256 SPIs implemented
May 17 00:11:37.897642 kernel: GICv3: 0 Extended SPIs implemented
May 17 00:11:37.897649 kernel: Root IRQ handler: gic_handle_irq
May 17 00:11:37.897657 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 17 00:11:37.897664 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 17 00:11:37.897671 kernel: ITS [mem 0x08080000-0x0809ffff]
May 17 00:11:37.897678 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 00:11:37.897685 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
May 17 00:11:37.897692 kernel: GICv3: using LPI property table @0x00000001000e0000
May 17 00:11:37.897699 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
May 17 00:11:37.897708 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:11:37.897714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:11:37.897721 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 17 00:11:37.897729 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 00:11:37.897736 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 00:11:37.897743 kernel: Console: colour dummy device 80x25
May 17 00:11:37.897750 kernel: ACPI: Core revision 20230628
May 17 00:11:37.897758 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 00:11:37.897765 kernel: pid_max: default: 32768 minimum: 301
May 17 00:11:37.897772 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:11:37.897781 kernel: landlock: Up and running.
May 17 00:11:37.897788 kernel: SELinux: Initializing.
May 17 00:11:37.897795 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:11:37.897803 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 17 00:11:37.897810 kernel: ACPI PPTT: PPTT table found, but unable to locate core 1 (1)
May 17 00:11:37.897817 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:11:37.897825 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:11:37.897832 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:11:37.897855 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:11:37.897866 kernel: Platform MSI: ITS@0x8080000 domain created
May 17 00:11:37.897873 kernel: PCI/MSI: ITS@0x8080000 domain created
May 17 00:11:37.897881 kernel: Remapping and enabling EFI services.
May 17 00:11:37.897888 kernel: smp: Bringing up secondary CPUs ...
May 17 00:11:37.897895 kernel: Detected PIPT I-cache on CPU1
May 17 00:11:37.897903 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 17 00:11:37.897913 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
May 17 00:11:37.897920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 00:11:37.897927 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 17 00:11:37.897936 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:11:37.897943 kernel: SMP: Total of 2 processors activated.
May 17 00:11:37.897953 kernel: CPU features: detected: 32-bit EL0 Support
May 17 00:11:37.897967 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 17 00:11:37.897990 kernel: CPU features: detected: Common not Private translations
May 17 00:11:37.898000 kernel: CPU features: detected: CRC32 instructions
May 17 00:11:37.898009 kernel: CPU features: detected: Enhanced Virtualization Traps
May 17 00:11:37.898017 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 17 00:11:37.898026 kernel: CPU features: detected: LSE atomic instructions
May 17 00:11:37.898035 kernel: CPU features: detected: Privileged Access Never
May 17 00:11:37.898043 kernel: CPU features: detected: RAS Extension Support
May 17 00:11:37.898054 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 17 00:11:37.898063 kernel: CPU: All CPU(s) started at EL1
May 17 00:11:37.898071 kernel: alternatives: applying system-wide alternatives
May 17 00:11:37.898078 kernel: devtmpfs: initialized
May 17 00:11:37.898086 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:11:37.898094 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:11:37.898106 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:11:37.898113 kernel: SMBIOS 3.0.0 present.
May 17 00:11:37.898121 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
May 17 00:11:37.898131 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:11:37.898139 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 17 00:11:37.898147 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 17 00:11:37.898154 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 17 00:11:37.898162 kernel: audit: initializing netlink subsys (disabled)
May 17 00:11:37.898170 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
May 17 00:11:37.898179 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:11:37.898187 kernel: cpuidle: using governor menu
May 17 00:11:37.898194 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 17 00:11:37.898202 kernel: ASID allocator initialised with 32768 entries
May 17 00:11:37.898209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:11:37.898217 kernel: Serial: AMBA PL011 UART driver
May 17 00:11:37.898224 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 17 00:11:37.898232 kernel: Modules: 0 pages in range for non-PLT usage
May 17 00:11:37.898239 kernel: Modules: 509024 pages in range for PLT usage
May 17 00:11:37.898248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:11:37.898256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:11:37.898264 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 17 00:11:37.898271 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 17 00:11:37.898278 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:11:37.898286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:11:37.898294 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 17 00:11:37.898301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 17 00:11:37.898308 kernel: ACPI: Added _OSI(Module Device)
May 17 00:11:37.898318 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:11:37.898326 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:11:37.898333 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:11:37.898341 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:11:37.898348 kernel: ACPI: Interpreter enabled
May 17 00:11:37.898356 kernel: ACPI: Using GIC for interrupt routing
May 17 00:11:37.898363 kernel: ACPI: MCFG table detected, 1 entries
May 17 00:11:37.898371 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 17 00:11:37.898378 kernel: printk: console [ttyAMA0] enabled
May 17 00:11:37.898387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:11:37.898548 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:11:37.898629 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 17 00:11:37.898724 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 17 00:11:37.898800 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 17 00:11:37.898944 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 17 00:11:37.898960 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 17 00:11:37.899048 kernel: PCI host bridge to bus 0000:00
May 17 00:11:37.899163 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 17 00:11:37.899238 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 17 00:11:37.899304 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 17 00:11:37.899378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:11:37.899487 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 17 00:11:37.899579 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
May 17 00:11:37.900073 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
May 17 00:11:37.900171 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:11:37.900256 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.900328 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
May 17 00:11:37.900406 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.900476 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
May 17 00:11:37.900562 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.900633 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
May 17 00:11:37.900712 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.900782 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
May 17 00:11:37.900878 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.900956 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
May 17 00:11:37.901093 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.901166 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
May 17 00:11:37.901240 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.901309 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
May 17 00:11:37.901384 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.901452 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
May 17 00:11:37.901537 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:11:37.901613 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
May 17 00:11:37.901692 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
May 17 00:11:37.901765 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
May 17 00:11:37.901893 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:11:37.902013 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
May 17 00:11:37.904228 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:11:37.904333 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:11:37.904425 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:11:37.904502 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
May 17 00:11:37.904583 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:11:37.904652 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
May 17 00:11:37.904719 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
May 17 00:11:37.904811 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:11:37.904942 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
May 17 00:11:37.905049 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:11:37.905121 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
May 17 00:11:37.905191 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
May 17 00:11:37.905267 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:11:37.905335 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
May 17 00:11:37.905411 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:11:37.905488 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:11:37.905559 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
May 17 00:11:37.905626 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
May 17 00:11:37.905699 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
May 17 00:11:37.905794 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
May 17 00:11:37.905900 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
May 17 00:11:37.905970 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
May 17 00:11:37.907237 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
May 17 00:11:37.907316 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
May 17 00:11:37.907389 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
May 17 00:11:37.907468 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 00:11:37.907535 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
May 17 00:11:37.907650 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 00:11:37.907817 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 00:11:37.907934 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
May 17 00:11:37.908059 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 00:11:37.908141 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
May 17 00:11:37.908211 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
May 17 00:11:37.908283 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
May 17 00:11:37.908365 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
May 17 00:11:37.908434 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
May 17 00:11:37.908500 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
May 17 00:11:37.908571 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:11:37.908638 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
May 17 00:11:37.908703 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
May 17 00:11:37.908773 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:11:37.908848 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
May 17 00:11:37.908931 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
May 17 00:11:37.910473 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:11:37.910570 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
May 17 00:11:37.910638 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
May 17 00:11:37.910709 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
May 17 00:11:37.910776 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:11:37.910871 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
May 17 00:11:37.910958 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:11:37.911052 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
May 17 00:11:37.911124 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:11:37.911193 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
May 17 00:11:37.911260 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:11:37.911331 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
May 17 00:11:37.911398 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:11:37.911472 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
May 17 00:11:37.911539 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:11:37.911610 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
May 17 00:11:37.911680 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:11:37.911749 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
May 17 00:11:37.911815 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:11:37.911898 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
May 17 00:11:37.911971 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:11:37.913823 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
May 17 00:11:37.913917 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
May 17 00:11:37.914631 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
May 17 00:11:37.914723 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:11:37.914796 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
May 17 00:11:37.914920 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:11:37.915113 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
May 17 00:11:37.915185 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:11:37.915255 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
May 17 00:11:37.915321 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
May 17 00:11:37.915389 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
May 17 00:11:37.915456 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
May 17 00:11:37.915525 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
May 17 00:11:37.915592 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
May 17 00:11:37.915674 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
May 17 00:11:37.915741 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
May 17 00:11:37.915811 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
May 17 00:11:37.915926 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
May 17 00:11:37.916816 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
May 17 00:11:37.916966 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
May 17 00:11:37.917068 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
May 17 00:11:37.917147 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
May 17 00:11:37.917227 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 17 00:11:37.917296 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
May 17 00:11:37.917367 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:11:37.917443 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
May 17 00:11:37.917530 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
May 17 00:11:37.917604 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:11:37.917692 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
May 17 00:11:37.917776 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:11:37.917859 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
May 17 00:11:37.917935 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
May 17 00:11:37.918027 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:11:37.918107 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
May 17 00:11:37.918182 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
May 17 00:11:37.918251 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:11:37.918319 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
May 17 00:11:37.918384 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
May 17 00:11:37.918450 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:11:37.918525 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
May 17 00:11:37.918593 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:11:37.918661 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
May 17 00:11:37.918731 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
May 17 00:11:37.918805 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:11:37.918907 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
May 17 00:11:37.919035 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
May 17 00:11:37.919117 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:11:37.919184 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
May 17 00:11:37.919252 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
May 17 00:11:37.919323 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:11:37.919403 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
May 17 00:11:37.919471 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
May 17 00:11:37.919540 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:11:37.919606 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
May 17 00:11:37.919671 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
May 17 00:11:37.919735 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:11:37.919809 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
May 17 00:11:37.919921 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
May 17 00:11:37.920225 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
May 17 00:11:37.920308 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:11:37.920374 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
May 17 00:11:37.920440 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
May 17 00:11:37.920504 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:11:37.920572 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:11:37.920636 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
May 17 00:11:37.920700 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
May 17 00:11:37.920772 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:11:37.920858 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:11:37.920932 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
May 17 00:11:37.921017 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
May 17 00:11:37.921091 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:11:37.921164 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 17 00:11:37.921221 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 17 00:11:37.921280 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 17 00:11:37.921357 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
May 17 00:11:37.921420 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
May 17 00:11:37.921483 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
May 17 00:11:37.921555 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
May 17 00:11:37.921627 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
May 17 00:11:37.921701 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
May 17 00:11:37.921785 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
May 17 00:11:37.921892 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
May 17 00:11:37.922035 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
May 17 00:11:37.922125 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
May 17 00:11:37.922189 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
May 17 00:11:37.922247 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
May 17 00:11:37.922313 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
May 17 00:11:37.922380 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
May 17 00:11:37.922442 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
May 17 00:11:37.922510 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
May 17 00:11:37.922574 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
May 17 00:11:37.922638 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
May 17 00:11:37.922707 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
May 17 00:11:37.922769 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
May 17 00:11:37.922833 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
May 17 00:11:37.922927 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
May 17 00:11:37.923019 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
May 17 00:11:37.923097 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
May 17 00:11:37.923179 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
May 17 00:11:37.923245 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
May 17 00:11:37.923306 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
May 17 00:11:37.923316 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 17 00:11:37.923324 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 17 00:11:37.923333 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 17 00:11:37.923341 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 17 00:11:37.923351 kernel: iommu: Default domain type: Translated
May 17 00:11:37.923359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 00:11:37.923368 kernel: efivars: Registered efivars operations
May 17 00:11:37.923375 kernel: vgaarb: loaded
May 17 00:11:37.923383 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 00:11:37.923393 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:11:37.923401 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:11:37.923409 kernel: pnp: PnP ACPI init
May 17 00:11:37.923488 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 17 00:11:37.923503 kernel: pnp: PnP ACPI: found 1 devices
May 17 00:11:37.923511 kernel: NET: Registered PF_INET protocol family
May 17 00:11:37.923519 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:11:37.923527 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 17 00:11:37.923535 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:11:37.923544 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:11:37.923552 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 17 00:11:37.923568 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 17 00:11:37.923577 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:11:37.923588 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 17 00:11:37.923596 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:11:37.923680 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
May 17 00:11:37.923692 kernel: PCI: CLS 0 bytes, default 64
May 17 00:11:37.923700 kernel: kvm [1]: HYP mode not available
May 17 00:11:37.923708 kernel: Initialise system trusted keyrings
May 17 00:11:37.923716 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 17 00:11:37.923724 kernel: Key type asymmetric registered
May 17 00:11:37.923735 kernel: Asymmetric key parser 'x509' registered
May 17 00:11:37.923747 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 17 00:11:37.923756 kernel: io scheduler mq-deadline registered
May 17 00:11:37.923767 kernel: io scheduler kyber registered
May 17 00:11:37.923775 kernel: io scheduler bfq registered
May 17 00:11:37.923784 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
May 17 00:11:37.923880 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
May 17 00:11:37.923961 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
May 17 00:11:37.924063 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
May 17 00:11:37.924136 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
May 17 00:11:37.924206 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
May 17 00:11:37.924276 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis-
LLActRep+ May 17 00:11:37.924350 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 May 17 00:11:37.924423 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 May 17 00:11:37.924496 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.924591 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 May 17 00:11:37.924686 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 May 17 00:11:37.924758 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.924870 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 May 17 00:11:37.924966 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 May 17 00:11:37.926767 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.926860 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 May 17 00:11:37.926936 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 May 17 00:11:37.927126 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.927203 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 May 17 00:11:37.927270 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 May 17 00:11:37.927342 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.927411 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 May 17 00:11:37.927478 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 May 17 00:11:37.927542 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 
00:11:37.927553 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 May 17 00:11:37.927619 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 May 17 00:11:37.927688 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 May 17 00:11:37.927755 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ May 17 00:11:37.927766 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 17 00:11:37.927775 kernel: ACPI: button: Power Button [PWRB] May 17 00:11:37.927783 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 17 00:11:37.927904 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) May 17 00:11:37.928016 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) May 17 00:11:37.928030 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:11:37.928038 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 May 17 00:11:37.928127 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) May 17 00:11:37.928139 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A May 17 00:11:37.928147 kernel: thunder_xcv, ver 1.0 May 17 00:11:37.928155 kernel: thunder_bgx, ver 1.0 May 17 00:11:37.928163 kernel: nicpf, ver 1.0 May 17 00:11:37.928171 kernel: nicvf, ver 1.0 May 17 00:11:37.928250 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 17 00:11:37.928313 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T00:11:37 UTC (1747440697) May 17 00:11:37.928327 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:11:37.928335 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 17 00:11:37.928344 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 17 00:11:37.928353 kernel: watchdog: Hard watchdog permanently disabled May 17 00:11:37.928361 kernel: NET: Registered PF_INET6 protocol family May 17 00:11:37.928369 kernel: Segment 
Routing with IPv6 May 17 00:11:37.928377 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:11:37.928388 kernel: NET: Registered PF_PACKET protocol family May 17 00:11:37.928397 kernel: Key type dns_resolver registered May 17 00:11:37.928409 kernel: registered taskstats version 1 May 17 00:11:37.928419 kernel: Loading compiled-in X.509 certificates May 17 00:11:37.928427 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 02f7129968574a1ae76b1ee42e7674ea1c42071b' May 17 00:11:37.928436 kernel: Key type .fscrypt registered May 17 00:11:37.928443 kernel: Key type fscrypt-provisioning registered May 17 00:11:37.928451 kernel: ima: No TPM chip found, activating TPM-bypass! May 17 00:11:37.928460 kernel: ima: Allocated hash algorithm: sha1 May 17 00:11:37.928468 kernel: ima: No architecture policies found May 17 00:11:37.928476 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 00:11:37.928485 kernel: clk: Disabling unused clocks May 17 00:11:37.928493 kernel: Freeing unused kernel memory: 39424K May 17 00:11:37.928508 kernel: Run /init as init process May 17 00:11:37.928516 kernel: with arguments: May 17 00:11:37.928524 kernel: /init May 17 00:11:37.928532 kernel: with environment: May 17 00:11:37.928540 kernel: HOME=/ May 17 00:11:37.928548 kernel: TERM=linux May 17 00:11:37.928555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 17 00:11:37.928567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:11:37.928577 systemd[1]: Detected virtualization kvm. May 17 00:11:37.928586 systemd[1]: Detected architecture arm64. May 17 00:11:37.928594 systemd[1]: Running in initrd. 
May 17 00:11:37.928602 systemd[1]: No hostname configured, using default hostname.
May 17 00:11:37.928611 systemd[1]: Hostname set to .
May 17 00:11:37.928621 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:11:37.928631 systemd[1]: Queued start job for default target initrd.target.
May 17 00:11:37.928639 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:11:37.928648 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:11:37.928657 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:11:37.928666 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:11:37.928674 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:11:37.928683 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:11:37.928694 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:11:37.928703 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:11:37.928711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:11:37.928720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:11:37.928728 systemd[1]: Reached target paths.target - Path Units.
May 17 00:11:37.928736 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:11:37.928745 systemd[1]: Reached target swap.target - Swaps.
May 17 00:11:37.928753 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:11:37.928763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:11:37.928772 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:11:37.928780 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:11:37.928789 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:11:37.928797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:11:37.928806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:11:37.928814 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:11:37.928823 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:11:37.928831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:11:37.928870 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:11:37.928880 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:11:37.928889 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:11:37.928897 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:11:37.928906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:11:37.928914 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:11:37.928923 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:11:37.928956 systemd-journald[235]: Collecting audit messages is disabled.
May 17 00:11:37.929560 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:11:37.929580 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:11:37.929596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:11:37.929605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:11:37.929614 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:11:37.929622 kernel: Bridge firewalling registered
May 17 00:11:37.929630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:11:37.929639 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:11:37.929651 systemd-journald[235]: Journal started
May 17 00:11:37.929671 systemd-journald[235]: Runtime Journal (/run/log/journal/420bf0a672684d2c9a0fc7643003df41) is 8.0M, max 76.6M, 68.6M free.
May 17 00:11:37.904233 systemd-modules-load[237]: Inserted module 'overlay'
May 17 00:11:37.933207 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:11:37.925908 systemd-modules-load[237]: Inserted module 'br_netfilter'
May 17 00:11:37.934133 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:11:37.943195 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:11:37.946110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:11:37.957173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:11:37.958506 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:11:37.966277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:11:37.976189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:11:37.978277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:11:37.980076 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:11:37.989050 dracut-cmdline[269]: dracut-dracut-053
May 17 00:11:37.992573 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3554ca41327a0c5ba7e4ac1b3147487d73f35805806dcb20264133a9c301eb5d
May 17 00:11:37.995039 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:11:38.022252 systemd-resolved[276]: Positive Trust Anchors:
May 17 00:11:38.022267 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:11:38.022298 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:11:38.033111 systemd-resolved[276]: Defaulting to hostname 'linux'.
May 17 00:11:38.034972 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:11:38.036328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:11:38.070010 kernel: SCSI subsystem initialized
May 17 00:11:38.075054 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:11:38.083029 kernel: iscsi: registered transport (tcp)
May 17 00:11:38.097203 kernel: iscsi: registered transport (qla4xxx)
May 17 00:11:38.097292 kernel: QLogic iSCSI HBA Driver
May 17 00:11:38.150158 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:11:38.156342 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:11:38.182087 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:11:38.182225 kernel: device-mapper: uevent: version 1.0.3
May 17 00:11:38.182259 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:11:38.234046 kernel: raid6: neonx8 gen() 15656 MB/s
May 17 00:11:38.251047 kernel: raid6: neonx4 gen() 15520 MB/s
May 17 00:11:38.268302 kernel: raid6: neonx2 gen() 13157 MB/s
May 17 00:11:38.285036 kernel: raid6: neonx1 gen() 10412 MB/s
May 17 00:11:38.302038 kernel: raid6: int64x8 gen() 6887 MB/s
May 17 00:11:38.319048 kernel: raid6: int64x4 gen() 7290 MB/s
May 17 00:11:38.336026 kernel: raid6: int64x2 gen() 6065 MB/s
May 17 00:11:38.353048 kernel: raid6: int64x1 gen() 5020 MB/s
May 17 00:11:38.353132 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
May 17 00:11:38.370040 kernel: raid6: .... xor() 11830 MB/s, rmw enabled
May 17 00:11:38.370119 kernel: raid6: using neon recovery algorithm
May 17 00:11:38.378041 kernel: xor: measuring software checksum speed
May 17 00:11:38.378147 kernel: 8regs : 19769 MB/sec
May 17 00:11:38.379209 kernel: 32regs : 18544 MB/sec
May 17 00:11:38.379259 kernel: arm64_neon : 25981 MB/sec
May 17 00:11:38.379270 kernel: xor: using function: arm64_neon (25981 MB/sec)
May 17 00:11:38.432035 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:11:38.447433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:11:38.453399 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:11:38.486489 systemd-udevd[454]: Using default interface naming scheme 'v255'.
May 17 00:11:38.490292 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:11:38.501168 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:11:38.517578 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
May 17 00:11:38.559547 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:11:38.566190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:11:38.615944 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:11:38.622292 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:11:38.644309 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:11:38.645488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:11:38.646177 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:11:38.648150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:11:38.657253 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:11:38.674739 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:11:38.728176 kernel: scsi host0: Virtio SCSI HBA
May 17 00:11:38.738142 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:11:38.739528 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:11:38.748563 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:11:38.749067 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:11:38.755783 kernel: ACPI: bus type USB registered
May 17 00:11:38.755804 kernel: usbcore: registered new interface driver usbfs
May 17 00:11:38.755814 kernel: usbcore: registered new interface driver hub
May 17 00:11:38.755945 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:11:38.757774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:11:38.758166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:11:38.760689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:11:38.765008 kernel: usbcore: registered new device driver usb
May 17 00:11:38.772427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:11:38.788058 kernel: sr 0:0:0:0: Power-on or device reset occurred
May 17 00:11:38.793057 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
May 17 00:11:38.793275 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:11:38.795048 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
May 17 00:11:38.806498 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:11:38.811447 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:11:38.811660 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:11:38.815143 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:11:38.816586 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:11:38.816737 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:11:38.816844 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:11:38.819019 kernel: sd 0:0:0:1: Power-on or device reset occurred
May 17 00:11:38.819230 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:11:38.819342 kernel: sd 0:0:0:1: [sda] Write Protect is off
May 17 00:11:38.819431 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
May 17 00:11:38.819511 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:11:38.817191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:11:38.826848 kernel: hub 1-0:1.0: USB hub found
May 17 00:11:38.827353 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:11:38.827459 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:11:38.827582 kernel: hub 2-0:1.0: USB hub found
May 17 00:11:38.827674 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:11:38.827760 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:11:38.827772 kernel: GPT:17805311 != 80003071
May 17 00:11:38.827781 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:11:38.827790 kernel: GPT:17805311 != 80003071
May 17 00:11:38.827799 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:11:38.827808 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:11:38.829995 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
May 17 00:11:38.855219 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:11:38.881630 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (521)
May 17 00:11:38.883003 kernel: BTRFS: device fsid 4797bc80-d55e-4b4a-8ede-cb88964b0162 devid 1 transid 43 /dev/sda3 scanned by (udev-worker) (515)
May 17 00:11:38.893889 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:11:38.902563 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:11:38.908651 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:11:38.913529 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:11:38.914676 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:11:38.921235 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:11:38.932468 disk-uuid[575]: Primary Header is updated.
May 17 00:11:38.932468 disk-uuid[575]: Secondary Entries is updated.
May 17 00:11:38.932468 disk-uuid[575]: Secondary Header is updated.
May 17 00:11:38.939052 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:11:38.944009 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:11:38.949010 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:11:39.068124 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 17 00:11:39.205317 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
May 17 00:11:39.205407 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 17 00:11:39.205808 kernel: usbcore: registered new interface driver usbhid
May 17 00:11:39.206281 kernel: usbhid: USB HID core driver
May 17 00:11:39.311088 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
May 17 00:11:39.443033 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
May 17 00:11:39.496033 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
May 17 00:11:39.952221 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:11:39.952622 disk-uuid[576]: The operation has completed successfully.
May 17 00:11:40.007608 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:11:40.007716 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:11:40.019225 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:11:40.034460 sh[594]: Success
May 17 00:11:40.047030 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 17 00:11:40.115743 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:11:40.136152 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:11:40.138470 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:11:40.168563 kernel: BTRFS info (device dm-0): first mount of filesystem 4797bc80-d55e-4b4a-8ede-cb88964b0162
May 17 00:11:40.168655 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 17 00:11:40.168684 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:11:40.168713 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:11:40.169081 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:11:40.176051 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:11:40.179289 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:11:40.181183 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:11:40.192260 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:11:40.198330 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:11:40.208135 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:11:40.208250 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
May 17 00:11:40.208279 kernel: BTRFS info (device sda6): using free space tree
May 17 00:11:40.214002 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:11:40.214057 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:11:40.224648 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:11:40.226492 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf
May 17 00:11:40.234316 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:11:40.240200 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:11:40.320084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:11:40.329227 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:11:40.342881 ignition[687]: Ignition 2.19.0
May 17 00:11:40.343595 ignition[687]: Stage: fetch-offline
May 17 00:11:40.344107 ignition[687]: no configs at "/usr/lib/ignition/base.d"
May 17 00:11:40.344666 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:11:40.344860 ignition[687]: parsed url from cmdline: ""
May 17 00:11:40.344863 ignition[687]: no config URL provided
May 17 00:11:40.344869 ignition[687]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:11:40.344878 ignition[687]: no config at "/usr/lib/ignition/user.ign"
May 17 00:11:40.344884 ignition[687]: failed to fetch config: resource requires networking
May 17 00:11:40.346946 ignition[687]: Ignition finished successfully
May 17 00:11:40.351014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:11:40.353897 systemd-networkd[781]: lo: Link UP
May 17 00:11:40.353908 systemd-networkd[781]: lo: Gained carrier
May 17 00:11:40.355504 systemd-networkd[781]: Enumeration completed
May 17 00:11:40.356277 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:11:40.356280 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:11:40.357166 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:11:40.357607 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:11:40.357610 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:11:40.359304 systemd[1]: Reached target network.target - Network.
May 17 00:11:40.361007 systemd-networkd[781]: eth0: Link UP
May 17 00:11:40.361011 systemd-networkd[781]: eth0: Gained carrier
May 17 00:11:40.361019 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:11:40.370304 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:11:40.373250 systemd-networkd[781]: eth1: Link UP
May 17 00:11:40.373257 systemd-networkd[781]: eth1: Gained carrier
May 17 00:11:40.373273 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:11:40.382515 ignition[784]: Ignition 2.19.0
May 17 00:11:40.382525 ignition[784]: Stage: fetch
May 17 00:11:40.382700 ignition[784]: no configs at "/usr/lib/ignition/base.d"
May 17 00:11:40.382709 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:11:40.382844 ignition[784]: parsed url from cmdline: ""
May 17 00:11:40.382848 ignition[784]: no config URL provided
May 17 00:11:40.382853 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:11:40.382863 ignition[784]: no config at "/usr/lib/ignition/user.ign"
May 17 00:11:40.382883 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 17 00:11:40.383413 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:11:40.406100 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:11:40.419215 systemd-networkd[781]: eth0: DHCPv4 address 91.99.12.209/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:11:40.583625 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 17 00:11:40.588725 ignition[784]: GET result: OK
May 17 00:11:40.588896 ignition[784]: parsing config with SHA512: 9b843d6fc42da3c25f2f74e424fcefac28d42ce8d0993ea055dedbeabd08b2ec60f2d8100b8b518c77f9b7f4e5eede93a38989fe2599fe062309b38452cfcd6b
May 17 00:11:40.594524 unknown[784]: fetched base config from "system"
May 17 00:11:40.594534 unknown[784]: fetched base config from "system"
May 17 00:11:40.595110 ignition[784]: fetch: fetch complete
May 17 00:11:40.594540 unknown[784]: fetched user config from "hetzner"
May 17 00:11:40.595116 ignition[784]: fetch: fetch passed
May 17 00:11:40.599285 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:11:40.595173 ignition[784]: Ignition finished successfully
May 17 00:11:40.608364 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:11:40.624747 ignition[792]: Ignition 2.19.0
May 17 00:11:40.624765 ignition[792]: Stage: kargs
May 17 00:11:40.625070 ignition[792]: no configs at "/usr/lib/ignition/base.d"
May 17 00:11:40.625081 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:11:40.626700 ignition[792]: kargs: kargs passed
May 17 00:11:40.628100 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:11:40.626780 ignition[792]: Ignition finished successfully
May 17 00:11:40.636352 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:11:40.649912 ignition[798]: Ignition 2.19.0
May 17 00:11:40.649919 ignition[798]: Stage: disks
May 17 00:11:40.650168 ignition[798]: no configs at "/usr/lib/ignition/base.d"
May 17 00:11:40.650178 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:11:40.651156 ignition[798]: disks: disks passed
May 17 00:11:40.651211 ignition[798]: Ignition finished successfully
May 17 00:11:40.654181 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:11:40.656299 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 17 00:11:40.657667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 17 00:11:40.659136 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:11:40.659699 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:11:40.660377 systemd[1]: Reached target basic.target - Basic System. May 17 00:11:40.666186 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 17 00:11:40.684217 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks May 17 00:11:40.688379 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 17 00:11:40.696706 systemd[1]: Mounting sysroot.mount - /sysroot... May 17 00:11:40.747031 kernel: EXT4-fs (sda9): mounted filesystem 50a777b7-c00f-4923-84ce-1c186fc0fd3b r/w with ordered data mode. Quota mode: none. May 17 00:11:40.748270 systemd[1]: Mounted sysroot.mount - /sysroot. May 17 00:11:40.750203 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 17 00:11:40.758129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:11:40.761099 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 17 00:11:40.763607 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 17 00:11:40.765403 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:11:40.766686 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
May 17 00:11:40.775003 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814) May 17 00:11:40.779310 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:11:40.779368 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:11:40.779381 kernel: BTRFS info (device sda6): using free space tree May 17 00:11:40.779087 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 17 00:11:40.782604 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 17 00:11:40.793157 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:11:40.793227 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:11:40.795605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 17 00:11:40.832610 coreos-metadata[816]: May 17 00:11:40.832 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 May 17 00:11:40.835205 coreos-metadata[816]: May 17 00:11:40.834 INFO Fetch successful May 17 00:11:40.837102 coreos-metadata[816]: May 17 00:11:40.836 INFO wrote hostname ci-4081-3-3-n-58e6742ed6 to /sysroot/etc/hostname May 17 00:11:40.840673 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:11:40.843625 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:11:40.849768 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory May 17 00:11:40.855115 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:11:40.860326 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:11:40.960555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 17 00:11:40.966116 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 17 00:11:40.971046 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
May 17 00:11:40.977009 kernel: BTRFS info (device sda6): last unmount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:11:41.005778 ignition[930]: INFO : Ignition 2.19.0 May 17 00:11:41.005778 ignition[930]: INFO : Stage: mount May 17 00:11:41.005778 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:11:41.005778 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:11:41.005778 ignition[930]: INFO : mount: mount passed May 17 00:11:41.005778 ignition[930]: INFO : Ignition finished successfully May 17 00:11:41.005740 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 17 00:11:41.007548 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 17 00:11:41.012303 systemd[1]: Starting ignition-files.service - Ignition (files)... May 17 00:11:41.169005 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 17 00:11:41.177424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 17 00:11:41.186021 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943) May 17 00:11:41.188082 kernel: BTRFS info (device sda6): first mount of filesystem 28a3b64b-9ec4-4fbe-928b-f7ea14288ccf May 17 00:11:41.188152 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm May 17 00:11:41.188178 kernel: BTRFS info (device sda6): using free space tree May 17 00:11:41.191025 kernel: BTRFS info (device sda6): enabling ssd optimizations May 17 00:11:41.191090 kernel: BTRFS info (device sda6): auto enabling async discard May 17 00:11:41.194200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 17 00:11:41.222154 ignition[960]: INFO : Ignition 2.19.0 May 17 00:11:41.222154 ignition[960]: INFO : Stage: files May 17 00:11:41.224423 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:11:41.224423 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:11:41.224423 ignition[960]: DEBUG : files: compiled without relabeling support, skipping May 17 00:11:41.226637 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:11:41.226637 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:11:41.230163 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:11:41.230163 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:11:41.230163 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:11:41.230163 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:11:41.230163 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 May 17 00:11:41.228681 unknown[960]: wrote ssh authorized keys file for user: core May 17 00:11:41.314208 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:11:41.551355 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" May 17 00:11:41.551355 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:11:41.553832 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 17 00:11:41.808159 systemd-networkd[781]: eth1: Gained IPv6LL May 17 00:11:42.129632 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:11:42.200889 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:11:42.209499 ignition[960]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:11:42.209499 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 May 17 00:11:42.320370 systemd-networkd[781]: eth0: Gained IPv6LL May 17 00:11:42.546999 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 17 00:11:42.708477 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" May 17 00:11:42.708477 ignition[960]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at 
"/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:11:42.711737 ignition[960]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" May 17 00:11:42.711737 ignition[960]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:11:42.711737 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:11:42.711737 ignition[960]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:11:42.711737 ignition[960]: INFO : files: files passed May 17 00:11:42.711737 ignition[960]: INFO : Ignition finished successfully May 17 00:11:42.715884 systemd[1]: Finished ignition-files.service - Ignition (files). May 17 00:11:42.723875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 17 00:11:42.727176 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 17 00:11:42.739677 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:11:42.739884 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 17 00:11:42.751602 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:11:42.751602 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 17 00:11:42.754151 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:11:42.756882 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:11:42.757782 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 17 00:11:42.765377 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 17 00:11:42.801434 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:11:42.801608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 17 00:11:42.803164 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 17 00:11:42.804370 systemd[1]: Reached target initrd.target - Initrd Default Target. May 17 00:11:42.805353 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 17 00:11:42.806709 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 17 00:11:42.827085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:11:42.834237 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 17 00:11:42.853651 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 17 00:11:42.854899 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:11:42.856184 systemd[1]: Stopped target timers.target - Timer Units. May 17 00:11:42.857080 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 17 00:11:42.857215 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 17 00:11:42.858571 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 17 00:11:42.859226 systemd[1]: Stopped target basic.target - Basic System. May 17 00:11:42.860264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 17 00:11:42.861299 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 17 00:11:42.862328 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 17 00:11:42.863393 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 17 00:11:42.864472 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 17 00:11:42.865630 systemd[1]: Stopped target sysinit.target - System Initialization. May 17 00:11:42.866681 systemd[1]: Stopped target local-fs.target - Local File Systems. May 17 00:11:42.867776 systemd[1]: Stopped target swap.target - Swaps. May 17 00:11:42.868609 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:11:42.868732 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 17 00:11:42.870033 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 17 00:11:42.870659 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:11:42.871644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 17 00:11:42.871724 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:11:42.872741 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:11:42.872874 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 17 00:11:42.874344 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
May 17 00:11:42.874458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 17 00:11:42.875616 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:11:42.875706 systemd[1]: Stopped ignition-files.service - Ignition (files). May 17 00:11:42.876725 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:11:42.876860 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 17 00:11:42.883293 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 17 00:11:42.883816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:11:42.883996 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:11:42.887558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 17 00:11:42.888137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:11:42.888280 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:11:42.894931 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:11:42.895217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 17 00:11:42.900015 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:11:42.900142 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 17 00:11:42.909682 ignition[1012]: INFO : Ignition 2.19.0 May 17 00:11:42.909682 ignition[1012]: INFO : Stage: umount May 17 00:11:42.912078 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" May 17 00:11:42.912078 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" May 17 00:11:42.911574 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 17 00:11:42.918915 ignition[1012]: INFO : umount: umount passed May 17 00:11:42.918915 ignition[1012]: INFO : Ignition finished successfully May 17 00:11:42.918621 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:11:42.919299 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 17 00:11:42.921433 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:11:42.921494 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 17 00:11:42.922915 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:11:42.922992 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 17 00:11:42.926149 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:11:42.926211 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). May 17 00:11:42.933453 systemd[1]: Stopped target network.target - Network. May 17 00:11:42.934720 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:11:42.934854 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 17 00:11:42.936696 systemd[1]: Stopped target paths.target - Path Units. May 17 00:11:42.938409 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:11:42.944062 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:11:42.949466 systemd[1]: Stopped target slices.target - Slice Units. May 17 00:11:42.951319 systemd[1]: Stopped target sockets.target - Socket Units. May 17 00:11:42.953899 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:11:42.954247 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 17 00:11:42.955224 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:11:42.955282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 17 00:11:42.956442 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 17 00:11:42.956497 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 17 00:11:42.957907 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 17 00:11:42.957954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 17 00:11:42.958956 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 17 00:11:42.960120 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 17 00:11:42.966501 systemd-networkd[781]: eth0: DHCPv6 lease lost May 17 00:11:42.968278 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:11:42.968439 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 17 00:11:42.969580 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:11:42.969633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 17 00:11:42.970345 systemd-networkd[781]: eth1: DHCPv6 lease lost May 17 00:11:42.972477 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:11:42.972617 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 17 00:11:42.974678 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:11:42.974822 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 17 00:11:42.977684 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:11:42.977741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 17 00:11:42.983156 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 17 00:11:42.983973 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:11:42.984099 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 17 00:11:42.986421 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:11:42.986493 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 17 00:11:42.987273 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:11:42.987315 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 17 00:11:42.988730 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 17 00:11:42.988825 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:11:42.990484 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:11:43.007048 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:11:43.007264 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:11:43.008520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:11:43.008567 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 17 00:11:43.009693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:11:43.009734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:11:43.011688 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:11:43.011953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 17 00:11:43.013354 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:11:43.013413 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 17 00:11:43.014946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 17 00:11:43.015014 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 17 00:11:43.024391 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 17 00:11:43.025685 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:11:43.025837 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 17 00:11:43.027504 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:11:43.027590 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:43.029190 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:11:43.030049 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 17 00:11:43.036317 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:11:43.036438 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 17 00:11:43.037893 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 17 00:11:43.045292 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 17 00:11:43.054075 systemd[1]: Switching root. May 17 00:11:43.086738 systemd-journald[235]: Journal stopped May 17 00:11:44.067951 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). May 17 00:11:44.070960 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:11:44.071006 kernel: SELinux: policy capability open_perms=1 May 17 00:11:44.071023 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:11:44.071035 kernel: SELinux: policy capability always_check_network=0 May 17 00:11:44.071047 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:11:44.071059 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:11:44.071070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:11:44.071081 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:11:44.071094 systemd[1]: Successfully loaded SELinux policy in 34.936ms. May 17 00:11:44.071122 kernel: audit: type=1403 audit(1747440703.233:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:11:44.071139 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.638ms. 
May 17 00:11:44.071154 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 17 00:11:44.071168 systemd[1]: Detected virtualization kvm. May 17 00:11:44.071181 systemd[1]: Detected architecture arm64. May 17 00:11:44.071194 systemd[1]: Detected first boot. May 17 00:11:44.071206 systemd[1]: Hostname set to <ci-4081-3-3-n-58e6742ed6>. May 17 00:11:44.071218 systemd[1]: Initializing machine ID from VM UUID. May 17 00:11:44.071231 zram_generator::config[1056]: No configuration found. May 17 00:11:44.071250 systemd[1]: Populated /etc with preset unit settings. May 17 00:11:44.071263 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:11:44.071275 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 17 00:11:44.071292 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:11:44.071310 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 17 00:11:44.071323 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 17 00:11:44.071335 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 17 00:11:44.071347 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 17 00:11:44.071360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 17 00:11:44.071374 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 17 00:11:44.071387 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 17 00:11:44.071399 systemd[1]: Created slice user.slice - User and Session Slice. 
May 17 00:11:44.071411 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 17 00:11:44.071424 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 17 00:11:44.071436 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 17 00:11:44.071449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 17 00:11:44.072110 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 17 00:11:44.072146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 17 00:11:44.072166 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 17 00:11:44.072184 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 17 00:11:44.072196 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 17 00:11:44.072207 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 17 00:11:44.072220 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 17 00:11:44.072231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 17 00:11:44.072251 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 17 00:11:44.072264 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 17 00:11:44.072277 systemd[1]: Reached target slices.target - Slice Units. May 17 00:11:44.072289 systemd[1]: Reached target swap.target - Swaps. May 17 00:11:44.072301 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 17 00:11:44.072315 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 17 00:11:44.072327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 17 00:11:44.072340 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 17 00:11:44.072352 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 17 00:11:44.072365 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 17 00:11:44.072380 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 17 00:11:44.072392 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 17 00:11:44.072404 systemd[1]: Mounting media.mount - External Media Directory... May 17 00:11:44.072416 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 17 00:11:44.072439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 17 00:11:44.072458 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 17 00:11:44.072472 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 00:11:44.072485 systemd[1]: Reached target machines.target - Containers. May 17 00:11:44.072498 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 17 00:11:44.072511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:44.072523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 17 00:11:44.072535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 17 00:11:44.072545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:11:44.072561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:11:44.072573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 17 00:11:44.072586 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 17 00:11:44.072598 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:11:44.072611 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:11:44.072623 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:11:44.072636 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 17 00:11:44.072648 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:11:44.072662 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:11:44.072675 systemd[1]: Starting systemd-journald.service - Journal Service... May 17 00:11:44.072686 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 17 00:11:44.072697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 17 00:11:44.072707 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 17 00:11:44.072718 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 17 00:11:44.072728 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:11:44.072750 systemd[1]: Stopped verity-setup.service. May 17 00:11:44.072761 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 17 00:11:44.072772 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 17 00:11:44.072784 systemd[1]: Mounted media.mount - External Media Directory. May 17 00:11:44.072796 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 17 00:11:44.072807 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 17 00:11:44.072817 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
May 17 00:11:44.072830 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 17 00:11:44.072843 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:11:44.072854 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 17 00:11:44.072864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:44.072874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:44.072885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:44.072895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:44.072905 kernel: fuse: init (API version 7.39) May 17 00:11:44.072959 systemd-journald[1123]: Collecting audit messages is disabled. May 17 00:11:44.073010 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 17 00:11:44.073025 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 17 00:11:44.073039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 17 00:11:44.073052 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:11:44.073064 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 17 00:11:44.073076 systemd[1]: Reached target network-pre.target - Preparation for Network. May 17 00:11:44.073086 kernel: loop: module loaded May 17 00:11:44.073097 systemd-journald[1123]: Journal started May 17 00:11:44.073125 systemd-journald[1123]: Runtime Journal (/run/log/journal/420bf0a672684d2c9a0fc7643003df41) is 8.0M, max 76.6M, 68.6M free. May 17 00:11:44.079506 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 17 00:11:43.766608 systemd[1]: Queued start job for default target multi-user.target. May 17 00:11:43.790649 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
May 17 00:11:43.791247 systemd[1]: systemd-journald.service: Deactivated successfully. May 17 00:11:44.094242 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 17 00:11:44.094307 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:11:44.094331 systemd[1]: Reached target local-fs.target - Local File Systems. May 17 00:11:44.101002 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 17 00:11:44.109999 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 17 00:11:44.110073 kernel: ACPI: bus type drm_connector registered May 17 00:11:44.116412 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 17 00:11:44.119005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:44.124005 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 17 00:11:44.126093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:11:44.137452 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 17 00:11:44.141067 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 17 00:11:44.150060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 17 00:11:44.150142 systemd[1]: Started systemd-journald.service - Journal Service. May 17 00:11:44.151622 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 17 00:11:44.152658 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:11:44.152841 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
May 17 00:11:44.153882 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:11:44.154048 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:11:44.155197 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 17 00:11:44.156032 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 17 00:11:44.159302 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 17 00:11:44.173625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 17 00:11:44.197458 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 17 00:11:44.205291 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 17 00:11:44.208814 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 17 00:11:44.211192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:11:44.215591 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 17 00:11:44.218676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 17 00:11:44.224191 kernel: loop0: detected capacity change from 0 to 114432 May 17 00:11:44.238383 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 17 00:11:44.258006 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 00:11:44.265324 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 17 00:11:44.276528 systemd-journald[1123]: Time spent on flushing to /var/log/journal/420bf0a672684d2c9a0fc7643003df41 is 49.914ms for 1140 entries. May 17 00:11:44.276528 systemd-journald[1123]: System Journal (/var/log/journal/420bf0a672684d2c9a0fc7643003df41) is 8.0M, max 584.8M, 576.8M free. 
May 17 00:11:44.341165 systemd-journald[1123]: Received client request to flush runtime journal. May 17 00:11:44.341228 kernel: loop1: detected capacity change from 0 to 8 May 17 00:11:44.341245 kernel: loop2: detected capacity change from 0 to 211168 May 17 00:11:44.299349 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 00:11:44.310242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 17 00:11:44.314024 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:11:44.331756 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 17 00:11:44.341856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 17 00:11:44.352706 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 17 00:11:44.384009 kernel: loop3: detected capacity change from 0 to 114328 May 17 00:11:44.395272 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. May 17 00:11:44.395292 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. May 17 00:11:44.405548 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 17 00:11:44.428012 kernel: loop4: detected capacity change from 0 to 114432 May 17 00:11:44.443341 kernel: loop5: detected capacity change from 0 to 8 May 17 00:11:44.447010 kernel: loop6: detected capacity change from 0 to 211168 May 17 00:11:44.476083 kernel: loop7: detected capacity change from 0 to 114328 May 17 00:11:44.494114 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. May 17 00:11:44.494549 (sd-merge)[1195]: Merged extensions into '/usr'. May 17 00:11:44.500934 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... May 17 00:11:44.501068 systemd[1]: Reloading... 
May 17 00:11:44.604001 zram_generator::config[1218]: No configuration found. May 17 00:11:44.693006 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 17 00:11:44.776506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:11:44.826894 systemd[1]: Reloading finished in 325 ms. May 17 00:11:44.878011 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 17 00:11:44.879285 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 17 00:11:44.890254 systemd[1]: Starting ensure-sysext.service... May 17 00:11:44.895332 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 17 00:11:44.911706 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... May 17 00:11:44.911738 systemd[1]: Reloading... May 17 00:11:44.939466 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 17 00:11:44.939827 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 17 00:11:44.940613 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 17 00:11:44.941205 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:11:44.941261 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. May 17 00:11:44.945797 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. May 17 00:11:44.945811 systemd-tmpfiles[1259]: Skipping /boot May 17 00:11:44.954347 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. 
May 17 00:11:44.954365 systemd-tmpfiles[1259]: Skipping /boot May 17 00:11:44.987018 zram_generator::config[1285]: No configuration found. May 17 00:11:45.091107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:11:45.140248 systemd[1]: Reloading finished in 227 ms. May 17 00:11:45.160219 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 17 00:11:45.166885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 17 00:11:45.181439 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:11:45.190473 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 17 00:11:45.195225 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 17 00:11:45.200332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 17 00:11:45.205329 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 17 00:11:45.210915 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 17 00:11:45.214750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:45.227853 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:11:45.231245 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:11:45.234227 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:11:45.237191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 17 00:11:45.240317 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:45.240477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:45.244738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 17 00:11:45.254257 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 17 00:11:45.255118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:45.266309 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 17 00:11:45.270018 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 17 00:11:45.273264 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:45.273411 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:45.285348 systemd-udevd[1333]: Using default interface naming scheme 'v255'. May 17 00:11:45.288867 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 17 00:11:45.292605 systemd[1]: Finished ensure-sysext.service. May 17 00:11:45.313863 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 17 00:11:45.316644 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 17 00:11:45.319984 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 17 00:11:45.322231 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:45.322399 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:45.323369 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 17 00:11:45.325051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 17 00:11:45.326214 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:11:45.326367 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 17 00:11:45.336665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:11:45.336794 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:11:45.336830 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:11:45.341209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 17 00:11:45.357193 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 17 00:11:45.361854 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 17 00:11:45.382910 augenrules[1379]: No rules May 17 00:11:45.385876 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:11:45.403061 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 17 00:11:45.431140 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 17 00:11:45.579130 systemd-networkd[1368]: lo: Link UP May 17 00:11:45.579139 systemd-networkd[1368]: lo: Gained carrier May 17 00:11:45.582578 systemd-networkd[1368]: Enumeration completed May 17 00:11:45.582832 systemd[1]: Started systemd-networkd.service - Network Configuration. May 17 00:11:45.587209 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 17 00:11:45.587327 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:11:45.592484 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:45.593163 systemd-networkd[1368]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:11:45.596690 systemd-networkd[1368]: eth0: Link UP May 17 00:11:45.596698 systemd-networkd[1368]: eth0: Gained carrier May 17 00:11:45.596754 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:45.600263 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 17 00:11:45.601044 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 17 00:11:45.601719 systemd[1]: Reached target time-set.target - System Time Set. May 17 00:11:45.602805 systemd-networkd[1368]: eth1: Link UP May 17 00:11:45.604037 systemd-networkd[1368]: eth1: Gained carrier May 17 00:11:45.604073 systemd-networkd[1368]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:45.615605 systemd-resolved[1331]: Positive Trust Anchors: May 17 00:11:45.615628 systemd-resolved[1331]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 17 00:11:45.615660 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 17 00:11:45.623698 systemd-resolved[1331]: Using system hostname 'ci-4081-3-3-n-58e6742ed6'. May 17 00:11:45.627305 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 17 00:11:45.629198 systemd[1]: Reached target network.target - Network. May 17 00:11:45.629849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 17 00:11:45.638615 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 17 00:11:45.655011 kernel: mousedev: PS/2 mouse device common for all mice May 17 00:11:45.670105 systemd-networkd[1368]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 May 17 00:11:45.673467 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:11:45.679774 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. May 17 00:11:45.680338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
May 17 00:11:45.689222 systemd-networkd[1368]: eth0: DHCPv4 address 91.99.12.209/32, gateway 172.31.1.1 acquired from 172.31.1.1 May 17 00:11:45.689646 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 17 00:11:45.690069 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:11:45.692283 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. May 17 00:11:45.693176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 17 00:11:45.698683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 17 00:11:45.699962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 17 00:11:45.700023 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 17 00:11:45.706530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 00:11:45.706739 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 17 00:11:45.714372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:11:45.714604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 17 00:11:45.715619 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:11:45.730360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1371) May 17 00:11:45.734664 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:11:45.735304 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 17 00:11:45.758436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 17 00:11:45.770078 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 May 17 00:11:45.776869 kernel: [drm] features: -virgl +edid -resource_blob -host_visible May 17 00:11:45.776935 kernel: [drm] features: -context_init May 17 00:11:45.783009 kernel: [drm] number of scanouts: 1 May 17 00:11:45.783107 kernel: [drm] number of cap sets: 0 May 17 00:11:45.794002 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 May 17 00:11:45.795523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:11:45.801961 kernel: Console: switching to colour frame buffer device 160x50 May 17 00:11:45.807020 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device May 17 00:11:45.820651 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. May 17 00:11:45.830334 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 17 00:11:45.835247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:11:45.837046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:45.845327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 17 00:11:45.851789 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 17 00:11:45.911762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 17 00:11:45.935652 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 17 00:11:45.945394 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 17 00:11:45.961015 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 17 00:11:45.988072 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 17 00:11:45.989812 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 17 00:11:45.990526 systemd[1]: Reached target sysinit.target - System Initialization. May 17 00:11:45.991284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 17 00:11:45.992194 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 17 00:11:45.993134 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 17 00:11:45.993814 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 17 00:11:45.994636 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 17 00:11:45.995330 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 17 00:11:45.995366 systemd[1]: Reached target paths.target - Path Units. May 17 00:11:45.995838 systemd[1]: Reached target timers.target - Timer Units. May 17 00:11:45.997668 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 17 00:11:45.999874 systemd[1]: Starting docker.socket - Docker Socket for the API... May 17 00:11:46.012105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 17 00:11:46.015129 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 17 00:11:46.016927 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 17 00:11:46.018109 systemd[1]: Reached target sockets.target - Socket Units. May 17 00:11:46.018912 systemd[1]: Reached target basic.target - Basic System. May 17 00:11:46.019802 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
May 17 00:11:46.019842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 17 00:11:46.032222 systemd[1]: Starting containerd.service - containerd container runtime... May 17 00:11:46.038284 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 17 00:11:46.040517 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 00:11:46.042432 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 17 00:11:46.047368 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 17 00:11:46.049562 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 17 00:11:46.052187 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 17 00:11:46.053517 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 17 00:11:46.055234 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 17 00:11:46.059237 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. May 17 00:11:46.064337 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 17 00:11:46.067244 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 17 00:11:46.070633 systemd[1]: Starting systemd-logind.service - User Login Management... May 17 00:11:46.072948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 17 00:11:46.073526 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 17 00:11:46.076268 systemd[1]: Starting update-engine.service - Update Engine... 
May 17 00:11:46.083114 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 17 00:11:46.111291 jq[1449]: false May 17 00:11:46.126097 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 17 00:11:46.127730 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 17 00:11:46.128353 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 17 00:11:46.142685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 00:11:46.142923 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 17 00:11:46.162944 jq[1460]: true May 17 00:11:46.167356 tar[1465]: linux-arm64/LICENSE May 17 00:11:46.167356 tar[1465]: linux-arm64/helm May 17 00:11:46.171638 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 17 00:11:46.182101 dbus-daemon[1448]: [system] SELinux support is enabled May 17 00:11:46.188892 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 17 00:11:46.193183 coreos-metadata[1447]: May 17 00:11:46.192 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 May 17 00:11:46.195713 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 00:11:46.195813 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
May 17 00:11:46.197406 coreos-metadata[1447]: May 17 00:11:46.197 INFO Fetch successful
May 17 00:11:46.197602 coreos-metadata[1447]: May 17 00:11:46.197 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 17 00:11:46.197748 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:11:46.197786 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:11:46.199588 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:11:46.202017 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:11:46.203234 coreos-metadata[1447]: May 17 00:11:46.202 INFO Fetch successful
May 17 00:11:46.209669 extend-filesystems[1450]: Found loop4
May 17 00:11:46.211876 jq[1481]: true
May 17 00:11:46.214577 extend-filesystems[1450]: Found loop5
May 17 00:11:46.214577 extend-filesystems[1450]: Found loop6
May 17 00:11:46.214577 extend-filesystems[1450]: Found loop7
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda1
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda2
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda3
May 17 00:11:46.214577 extend-filesystems[1450]: Found usr
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda4
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda6
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda7
May 17 00:11:46.214577 extend-filesystems[1450]: Found sda9
May 17 00:11:46.214577 extend-filesystems[1450]: Checking size of /dev/sda9
May 17 00:11:46.228049 update_engine[1459]: I20250517 00:11:46.223098 1459 main.cc:92] Flatcar Update Engine starting
May 17 00:11:46.229567 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:11:46.232254 update_engine[1459]: I20250517 00:11:46.231884 1459 update_check_scheduler.cc:74] Next update check in 7m4s
May 17 00:11:46.235242 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:11:46.284052 extend-filesystems[1450]: Resized partition /dev/sda9
May 17 00:11:46.295729 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024)
May 17 00:11:46.292157 systemd-logind[1458]: New seat seat0.
May 17 00:11:46.293273 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 00:11:46.293290 systemd-logind[1458]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
May 17 00:11:46.293559 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:11:46.310005 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 17 00:11:46.392402 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:11:46.397955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:11:46.445046 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1362)
May 17 00:11:46.446853 bash[1518]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:11:46.449029 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:11:46.468481 systemd[1]: Starting sshkeys.service...
May 17 00:11:46.513274 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:11:46.527508 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:11:46.538756 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 17 00:11:46.559271 containerd[1473]: time="2025-05-17T00:11:46.558593120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:11:46.561307 extend-filesystems[1501]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 17 00:11:46.561307 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 5
May 17 00:11:46.561307 extend-filesystems[1501]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 17 00:11:46.571190 extend-filesystems[1450]: Resized filesystem in /dev/sda9
May 17 00:11:46.571190 extend-filesystems[1450]: Found sr0
May 17 00:11:46.563824 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:11:46.564684 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:11:46.573161 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:11:46.609033 coreos-metadata[1530]: May 17 00:11:46.608 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 17 00:11:46.609033 coreos-metadata[1530]: May 17 00:11:46.609 INFO Fetch successful
May 17 00:11:46.612268 unknown[1530]: wrote ssh authorized keys file for user: core
May 17 00:11:46.629878 containerd[1473]: time="2025-05-17T00:11:46.628786560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632278800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632325960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632344720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632510360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632526720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632589640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632602040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632809960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632829880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632847440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:11:46.633605 containerd[1473]: time="2025-05-17T00:11:46.632857440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.633932 containerd[1473]: time="2025-05-17T00:11:46.632929840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.635199 containerd[1473]: time="2025-05-17T00:11:46.635163480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:11:46.635414 containerd[1473]: time="2025-05-17T00:11:46.635301200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:11:46.635414 containerd[1473]: time="2025-05-17T00:11:46.635323560Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:11:46.635414 containerd[1473]: time="2025-05-17T00:11:46.635407680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:11:46.635544 containerd[1473]: time="2025-05-17T00:11:46.635447480Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641482960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641616640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641636240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641652120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641666320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:11:46.642179 containerd[1473]: time="2025-05-17T00:11:46.641915680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643273920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643445320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643464560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643478360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643503480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643519600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643534280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643549800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643565560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643587240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643600840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643618920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643657520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:11:46.644698 containerd[1473]: time="2025-05-17T00:11:46.643678720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643704640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643720720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643742480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643755600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643769440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:11:46.645133 containerd[1473]: time="2025-05-17T00:11:46.643783520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.643797920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646019160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646036760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646052920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646076960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646103080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646131560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646156520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646173440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:11:46.646412 containerd[1473]: time="2025-05-17T00:11:46.646310560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646330080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646489200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646503720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646514440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646528320Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646541400Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:11:46.646664 containerd[1473]: time="2025-05-17T00:11:46.646568120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:11:46.647135 containerd[1473]: time="2025-05-17T00:11:46.647037760Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:11:46.647135 containerd[1473]: time="2025-05-17T00:11:46.647117360Z" level=info msg="Connect containerd service"
May 17 00:11:46.647294 containerd[1473]: time="2025-05-17T00:11:46.647158360Z" level=info msg="using legacy CRI server"
May 17 00:11:46.647294 containerd[1473]: time="2025-05-17T00:11:46.647165800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:11:46.647330 containerd[1473]: time="2025-05-17T00:11:46.647291360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:11:46.650848 containerd[1473]: time="2025-05-17T00:11:46.650242960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:11:46.650957 containerd[1473]: time="2025-05-17T00:11:46.650914320Z" level=info msg="Start subscribing containerd event"
May 17 00:11:46.651320 containerd[1473]: time="2025-05-17T00:11:46.651001480Z" level=info msg="Start recovering state"
May 17 00:11:46.651320 containerd[1473]: time="2025-05-17T00:11:46.651088360Z" level=info msg="Start event monitor"
May 17 00:11:46.651320 containerd[1473]: time="2025-05-17T00:11:46.651100320Z" level=info msg="Start snapshots syncer"
May 17 00:11:46.651320 containerd[1473]: time="2025-05-17T00:11:46.651109480Z" level=info msg="Start cni network conf syncer for default"
May 17 00:11:46.651320 containerd[1473]: time="2025-05-17T00:11:46.651117640Z" level=info msg="Start streaming server"
May 17 00:11:46.655025 containerd[1473]: time="2025-05-17T00:11:46.651885640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:11:46.655025 containerd[1473]: time="2025-05-17T00:11:46.651940840Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:11:46.655025 containerd[1473]: time="2025-05-17T00:11:46.652210920Z" level=info msg="containerd successfully booted in 0.109194s"
May 17 00:11:46.652117 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:11:46.662569 update-ssh-keys[1538]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:11:46.666225 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:11:46.671852 systemd[1]: Finished sshkeys.service.
May 17 00:11:46.864202 systemd-networkd[1368]: eth1: Gained IPv6LL
May 17 00:11:46.865122 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
May 17 00:11:46.873184 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:11:46.874732 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:11:46.883289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:11:46.893752 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:11:46.951138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 00:11:47.078918 tar[1465]: linux-arm64/README.md
May 17 00:11:47.098647 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:11:47.441113 systemd-networkd[1368]: eth0: Gained IPv6LL
May 17 00:11:47.441551 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection.
May 17 00:11:47.708881 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:11:47.735343 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:11:47.741151 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:11:47.747912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:11:47.752632 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:11:47.759897 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:11:47.760193 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:11:47.767463 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:11:47.780025 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:11:47.788167 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:11:47.794761 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 17 00:11:47.795618 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:11:47.796868 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 00:11:47.798223 systemd[1]: Startup finished in 812ms (kernel) + 5.537s (initrd) + 4.599s (userspace) = 10.949s.
May 17 00:11:48.290604 kubelet[1570]: E0517 00:11:48.290466 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:11:48.293308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:11:48.293464 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:11:58.543751 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:11:58.549417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:11:58.677285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:11:58.690505 (kubelet)[1597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:11:58.742082 kubelet[1597]: E0517 00:11:58.741934 1597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:11:58.746911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:11:58.747254 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:12:08.821588 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:12:08.833493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:12:08.982337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:12:08.984389 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:12:09.036382 kubelet[1612]: E0517 00:12:09.036317 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:12:09.039279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:12:09.039601 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:12:17.703909 systemd-timesyncd[1352]: Contacted time server 217.91.44.17:123 (2.flatcar.pool.ntp.org).
May 17 00:12:17.704076 systemd-timesyncd[1352]: Initial clock synchronization to Sat 2025-05-17 00:12:17.925020 UTC.
May 17 00:12:19.071791 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 00:12:19.082266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:12:19.244272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:12:19.253150 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:12:19.304327 kubelet[1627]: E0517 00:12:19.304123 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:12:19.307014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:12:19.307188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:12:29.321818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 17 00:12:29.329429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:12:29.450318 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:12:29.452698 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:12:29.500583 kubelet[1642]: E0517 00:12:29.500528 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:12:29.502892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:12:29.503215 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:12:31.896090 update_engine[1459]: I20250517 00:12:31.895311 1459 update_attempter.cc:509] Updating boot flags...
May 17 00:12:31.936062 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 43 scanned by (udev-worker) (1658)
May 17 00:12:39.571312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 17 00:12:39.584356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:12:39.719712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:12:39.725131 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:12:39.767429 kubelet[1672]: E0517 00:12:39.767354 1672 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:12:39.770541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:12:39.770699 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:12:49.821407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 17 00:12:49.835340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:12:49.970926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:12:49.976448 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:12:50.028632 kubelet[1687]: E0517 00:12:50.028541 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:12:50.032377 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:12:50.032697 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:00.071395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 17 00:13:00.082361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:00.200408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:00.208465 (kubelet)[1701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:00.247320 kubelet[1701]: E0517 00:13:00.247234 1701 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:00.250965 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:00.251188 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:10.321113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 17 00:13:10.328349 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:10.460699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:10.465811 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:10.512578 kubelet[1717]: E0517 00:13:10.512495 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:10.516243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:10.516469 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:20.571649 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 17 00:13:20.582408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:20.722314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:20.734560 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:20.783297 kubelet[1732]: E0517 00:13:20.783201 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:20.786453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:20.786610 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:26.470997 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 17 00:13:26.483536 systemd[1]: Started sshd@0-91.99.12.209:22-139.178.68.195:49844.service - OpenSSH per-connection server daemon (139.178.68.195:49844).
May 17 00:13:27.468156 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 49844 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:27.471779 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:27.482516 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 17 00:13:27.498016 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 17 00:13:27.502931 systemd-logind[1458]: New session 1 of user core.
May 17 00:13:27.515940 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 17 00:13:27.523808 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 17 00:13:27.542021 (systemd)[1744]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:13:27.655723 systemd[1744]: Queued start job for default target default.target.
May 17 00:13:27.669811 systemd[1744]: Created slice app.slice - User Application Slice.
May 17 00:13:27.670052 systemd[1744]: Reached target paths.target - Paths.
May 17 00:13:27.670161 systemd[1744]: Reached target timers.target - Timers.
May 17 00:13:27.671850 systemd[1744]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 17 00:13:27.684909 systemd[1744]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 17 00:13:27.685071 systemd[1744]: Reached target sockets.target - Sockets.
May 17 00:13:27.685085 systemd[1744]: Reached target basic.target - Basic System.
May 17 00:13:27.685137 systemd[1744]: Reached target default.target - Main User Target.
May 17 00:13:27.685163 systemd[1744]: Startup finished in 135ms.
May 17 00:13:27.685549 systemd[1]: Started user@500.service - User Manager for UID 500.
May 17 00:13:27.695518 systemd[1]: Started session-1.scope - Session 1 of User core.
May 17 00:13:28.392340 systemd[1]: Started sshd@1-91.99.12.209:22-139.178.68.195:49852.service - OpenSSH per-connection server daemon (139.178.68.195:49852).
May 17 00:13:29.371091 sshd[1755]: Accepted publickey for core from 139.178.68.195 port 49852 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:29.373151 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:29.377749 systemd-logind[1458]: New session 2 of user core.
May 17 00:13:29.386278 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 00:13:30.049167 sshd[1755]: pam_unix(sshd:session): session closed for user core
May 17 00:13:30.053910 systemd[1]: sshd@1-91.99.12.209:22-139.178.68.195:49852.service: Deactivated successfully.
May 17 00:13:30.056138 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:13:30.059636 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit.
May 17 00:13:30.060643 systemd-logind[1458]: Removed session 2.
May 17 00:13:30.227539 systemd[1]: Started sshd@2-91.99.12.209:22-139.178.68.195:49868.service - OpenSSH per-connection server daemon (139.178.68.195:49868).
May 17 00:13:30.821108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 17 00:13:30.827240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:30.949209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:30.961927 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:31.006644 kubelet[1772]: E0517 00:13:31.006548 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:31.009257 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:31.009423 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:31.205491 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 49868 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:31.208419 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:31.216698 systemd-logind[1458]: New session 3 of user core.
May 17 00:13:31.226316 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 00:13:31.881107 sshd[1762]: pam_unix(sshd:session): session closed for user core
May 17 00:13:31.886791 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit.
May 17 00:13:31.887194 systemd[1]: sshd@2-91.99.12.209:22-139.178.68.195:49868.service: Deactivated successfully.
May 17 00:13:31.890953 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:13:31.892936 systemd-logind[1458]: Removed session 3.
May 17 00:13:32.063431 systemd[1]: Started sshd@3-91.99.12.209:22-139.178.68.195:49884.service - OpenSSH per-connection server daemon (139.178.68.195:49884).
May 17 00:13:33.054304 sshd[1784]: Accepted publickey for core from 139.178.68.195 port 49884 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:33.056669 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:33.062016 systemd-logind[1458]: New session 4 of user core.
May 17 00:13:33.072307 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 00:13:33.746249 sshd[1784]: pam_unix(sshd:session): session closed for user core
May 17 00:13:33.751131 systemd[1]: sshd@3-91.99.12.209:22-139.178.68.195:49884.service: Deactivated successfully.
May 17 00:13:33.754798 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:13:33.755875 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit.
May 17 00:13:33.757110 systemd-logind[1458]: Removed session 4.
May 17 00:13:33.927462 systemd[1]: Started sshd@4-91.99.12.209:22-139.178.68.195:45016.service - OpenSSH per-connection server daemon (139.178.68.195:45016).
May 17 00:13:34.914543 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 45016 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:34.916621 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:34.922051 systemd-logind[1458]: New session 5 of user core.
May 17 00:13:34.931737 systemd[1]: Started session-5.scope - Session 5 of User core.
May 17 00:13:35.454641 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 00:13:35.455071 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:35.473756 sudo[1794]: pam_unix(sudo:session): session closed for user root
May 17 00:13:35.636460 sshd[1791]: pam_unix(sshd:session): session closed for user core
May 17 00:13:35.641533 systemd[1]: sshd@4-91.99.12.209:22-139.178.68.195:45016.service: Deactivated successfully.
May 17 00:13:35.644298 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:13:35.645401 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit.
May 17 00:13:35.646449 systemd-logind[1458]: Removed session 5.
May 17 00:13:35.814682 systemd[1]: Started sshd@5-91.99.12.209:22-139.178.68.195:45030.service - OpenSSH per-connection server daemon (139.178.68.195:45030).
May 17 00:13:36.816719 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 45030 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:36.819527 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:36.826190 systemd-logind[1458]: New session 6 of user core.
May 17 00:13:36.836328 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 00:13:37.356306 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 00:13:37.357063 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:37.362072 sudo[1803]: pam_unix(sudo:session): session closed for user root
May 17 00:13:37.367941 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 00:13:37.368315 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:37.393588 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 17 00:13:37.397020 auditctl[1806]: No rules
May 17 00:13:37.396869 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 00:13:37.397059 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 17 00:13:37.405630 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:13:37.430447 augenrules[1824]: No rules
May 17 00:13:37.433070 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:13:37.434787 sudo[1802]: pam_unix(sudo:session): session closed for user root
May 17 00:13:37.598827 sshd[1799]: pam_unix(sshd:session): session closed for user core
May 17 00:13:37.604800 systemd[1]: sshd@5-91.99.12.209:22-139.178.68.195:45030.service: Deactivated successfully.
May 17 00:13:37.608254 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:13:37.610487 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit.
May 17 00:13:37.611595 systemd-logind[1458]: Removed session 6.
May 17 00:13:37.770771 systemd[1]: Started sshd@6-91.99.12.209:22-139.178.68.195:45044.service - OpenSSH per-connection server daemon (139.178.68.195:45044).
May 17 00:13:38.768759 sshd[1832]: Accepted publickey for core from 139.178.68.195 port 45044 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:13:38.770853 sshd[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:13:38.776249 systemd-logind[1458]: New session 7 of user core.
May 17 00:13:38.783321 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 00:13:39.294570 sudo[1835]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:13:39.294852 sudo[1835]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:13:39.602716 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 00:13:39.603615 (dockerd)[1850]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 00:13:39.866131 dockerd[1850]: time="2025-05-17T00:13:39.865902558Z" level=info msg="Starting up"
May 17 00:13:39.953787 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport876086122-merged.mount: Deactivated successfully.
May 17 00:13:39.982669 dockerd[1850]: time="2025-05-17T00:13:39.982612970Z" level=info msg="Loading containers: start."
May 17 00:13:40.112044 kernel: Initializing XFRM netlink socket
May 17 00:13:40.211654 systemd-networkd[1368]: docker0: Link UP
May 17 00:13:40.244136 dockerd[1850]: time="2025-05-17T00:13:40.243272144Z" level=info msg="Loading containers: done."
May 17 00:13:40.262160 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3521780501-merged.mount: Deactivated successfully.
May 17 00:13:40.268292 dockerd[1850]: time="2025-05-17T00:13:40.268183318Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:13:40.268450 dockerd[1850]: time="2025-05-17T00:13:40.268380489Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 00:13:40.268642 dockerd[1850]: time="2025-05-17T00:13:40.268592901Z" level=info msg="Daemon has completed initialization"
May 17 00:13:40.319264 dockerd[1850]: time="2025-05-17T00:13:40.318612421Z" level=info msg="API listen on /run/docker.sock"
May 17 00:13:40.319508 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 00:13:41.071141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 17 00:13:41.079413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:41.093452 containerd[1473]: time="2025-05-17T00:13:41.093065653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 17 00:13:41.228308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:41.234462 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:41.284931 kubelet[1998]: E0517 00:13:41.284869 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:41.288356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:41.288740 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:41.758519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170836470.mount: Deactivated successfully.
May 17 00:13:44.007771 containerd[1473]: time="2025-05-17T00:13:44.007679456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:44.009911 containerd[1473]: time="2025-05-17T00:13:44.009565797Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349442"
May 17 00:13:44.011324 containerd[1473]: time="2025-05-17T00:13:44.011275009Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:44.015288 containerd[1473]: time="2025-05-17T00:13:44.015238902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:44.016908 containerd[1473]: time="2025-05-17T00:13:44.016856389Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 2.923744453s"
May 17 00:13:44.017105 containerd[1473]: time="2025-05-17T00:13:44.017086321Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 17 00:13:44.018988 containerd[1473]: time="2025-05-17T00:13:44.018939500Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 17 00:13:46.535134 containerd[1473]: time="2025-05-17T00:13:46.535064744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:46.537419 containerd[1473]: time="2025-05-17T00:13:46.537372385Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531755"
May 17 00:13:46.538564 containerd[1473]: time="2025-05-17T00:13:46.538512085Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:46.543168 containerd[1473]: time="2025-05-17T00:13:46.543105245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:46.544287 containerd[1473]: time="2025-05-17T00:13:46.543679515Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 2.524661451s"
May 17 00:13:46.544287 containerd[1473]: time="2025-05-17T00:13:46.543724198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 17 00:13:46.544492 containerd[1473]: time="2025-05-17T00:13:46.544423554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 17 00:13:48.286145 containerd[1473]: time="2025-05-17T00:13:48.286087317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:48.288122 containerd[1473]: time="2025-05-17T00:13:48.288065578Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293751"
May 17 00:13:48.289223 containerd[1473]: time="2025-05-17T00:13:48.289165675Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:48.294060 containerd[1473]: time="2025-05-17T00:13:48.293939720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:48.295244 containerd[1473]: time="2025-05-17T00:13:48.294838406Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.750370929s"
May 17 00:13:48.295244 containerd[1473]: time="2025-05-17T00:13:48.294876288Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 17 00:13:48.295427 containerd[1473]: time="2025-05-17T00:13:48.295396794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 17 00:13:49.716931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381195083.mount: Deactivated successfully.
May 17 00:13:50.308736 containerd[1473]: time="2025-05-17T00:13:50.307930771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:50.308736 containerd[1473]: time="2025-05-17T00:13:50.308701130Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196030"
May 17 00:13:50.309720 containerd[1473]: time="2025-05-17T00:13:50.309670819Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:50.312082 containerd[1473]: time="2025-05-17T00:13:50.312048858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:50.312881 containerd[1473]: time="2025-05-17T00:13:50.312846658Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 2.017416623s"
May 17 00:13:50.313014 containerd[1473]: time="2025-05-17T00:13:50.312964704Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\""
May 17 00:13:50.313544 containerd[1473]: time="2025-05-17T00:13:50.313519172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 17 00:13:50.913465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999381037.mount: Deactivated successfully.
May 17 00:13:51.321559 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 17 00:13:51.338406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:13:51.455183 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:13:51.468869 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:13:51.509533 kubelet[2131]: E0517 00:13:51.509469 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:13:51.513221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:13:51.513493 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:13:52.511863 containerd[1473]: time="2025-05-17T00:13:52.510764564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:52.513101 containerd[1473]: time="2025-05-17T00:13:52.513054437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209"
May 17 00:13:52.514494 containerd[1473]: time="2025-05-17T00:13:52.514437666Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:52.518440 containerd[1473]: time="2025-05-17T00:13:52.518066165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:52.519897 containerd[1473]: time="2025-05-17T00:13:52.519720287Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.206161593s"
May 17 00:13:52.519897 containerd[1473]: time="2025-05-17T00:13:52.519765849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
May 17 00:13:52.520507 containerd[1473]: time="2025-05-17T00:13:52.520438442Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:13:53.031180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154057589.mount: Deactivated successfully.
May 17 00:13:53.040018 containerd[1473]: time="2025-05-17T00:13:53.039219053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:53.040018 containerd[1473]: time="2025-05-17T00:13:53.039925808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723"
May 17 00:13:53.040940 containerd[1473]: time="2025-05-17T00:13:53.040859813Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:53.043893 containerd[1473]: time="2025-05-17T00:13:53.043576987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:53.044582 containerd[1473]: time="2025-05-17T00:13:53.044543394Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 524.07303ms"
May 17 00:13:53.044677 containerd[1473]: time="2025-05-17T00:13:53.044633238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 17 00:13:53.045431 containerd[1473]: time="2025-05-17T00:13:53.045406236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 17 00:13:57.888040 containerd[1473]: time="2025-05-17T00:13:57.887734826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.889873 containerd[1473]: time="2025-05-17T00:13:57.889432063Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230195"
May 17 00:13:57.891233 containerd[1473]: time="2025-05-17T00:13:57.891169061Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.896440 containerd[1473]: time="2025-05-17T00:13:57.896363453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:13:57.898062 containerd[1473]: time="2025-05-17T00:13:57.897878851Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 4.852331248s"
May 17 00:13:57.898062 containerd[1473]: time="2025-05-17T00:13:57.897922131Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 17 00:14:01.571519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
May 17 00:14:01.579250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:01.733520 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:14:01.735939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:01.798531 kubelet[2186]: E0517 00:14:01.798473 2186 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:14:01.802016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:14:01.802204 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:14:03.523375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:03.534329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:03.572235 systemd[1]: Reloading requested from client PID 2200 ('systemctl') (unit session-7.scope)...
May 17 00:14:03.572255 systemd[1]: Reloading...
May 17 00:14:03.694011 zram_generator::config[2252]: No configuration found.
May 17 00:14:03.778265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:14:03.852792 systemd[1]: Reloading finished in 280 ms.
May 17 00:14:03.910177 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 00:14:03.910283 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 00:14:03.910595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:03.918635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:14:04.039733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:14:04.051488 (kubelet)[2287]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:14:04.098215 kubelet[2287]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:14:04.098215 kubelet[2287]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:14:04.098215 kubelet[2287]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:14:04.098215 kubelet[2287]: I0517 00:14:04.097887 2287 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:14:04.804058 kubelet[2287]: I0517 00:14:04.803118 2287 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 17 00:14:04.804058 kubelet[2287]: I0517 00:14:04.803154 2287 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:14:04.804058 kubelet[2287]: I0517 00:14:04.803471 2287 server.go:956] "Client rotation is on, will bootstrap in background"
May 17 00:14:04.834449 kubelet[2287]: E0517 00:14:04.834293 2287 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://91.99.12.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 17 00:14:04.837732 kubelet[2287]: I0517 00:14:04.836827 2287 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:14:04.853494 kubelet[2287]: E0517 00:14:04.853436 2287 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:14:04.853725 kubelet[2287]: I0517 00:14:04.853706 2287 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:14:04.856601 kubelet[2287]: I0517 00:14:04.856523 2287 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:14:04.858779 kubelet[2287]: I0517 00:14:04.858647 2287 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:14:04.859034 kubelet[2287]: I0517 00:14:04.858738 2287 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-58e6742ed6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:14:04.859141 kubelet[2287]: I0517 00:14:04.859101 2287 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:14:04.859141 kubelet[2287]: I0517 00:14:04.859115 2287 container_manager_linux.go:303] "Creating device plugin manager"
May 17 00:14:04.859410 kubelet[2287]: I0517 00:14:04.859375 2287 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:14:04.863213 kubelet[2287]: I0517 00:14:04.863051 2287 kubelet.go:480] "Attempting to sync node with API server"
May 17 00:14:04.863213 kubelet[2287]: I0517 00:14:04.863088 2287 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:14:04.863213 kubelet[2287]: I0517 00:14:04.863119 2287 kubelet.go:386] "Adding apiserver pod source"
May 17 00:14:04.865079 kubelet[2287]: I0517 00:14:04.864678 2287 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:14:04.869813 kubelet[2287]: I0517 00:14:04.869292 2287 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:14:04.870284 kubelet[2287]: I0517 00:14:04.870263 2287 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 17 00:14:04.870467 kubelet[2287]: W0517 00:14:04.870455 2287 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
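The kubelet entries above all carry a klog-style header (`I0517 00:14:04.863088 2287 kubelet.go:375] …`: severity letter, MMDD, time, PID, source file and line). A small parsing sketch for that header, with the field layout inferred from the entries in this log:

```python
import re

# Hypothetical parser for the klog-style headers seen in the kubelet lines:
#   <severity>MMDD HH:MM:SS.micros <pid> <file>:<line>] <message>
# The field layout is an assumption based on the entries in this log.
KLOG = re.compile(
    r"(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) (?P<pid>\d+) "
    r"(?P<src>[\w.]+:\d+)\] (?P<msg>.*)"
)

def parse_klog(line: str):
    """Return the header fields as a dict, or None if no header is present."""
    m = KLOG.search(line)
    return m.groupdict() if m else None

rec = parse_klog('I0517 00:14:04.863088 2287 kubelet.go:375] "Adding static pod path"')
```

Feeding each journal line through `parse_klog` separates kubelet output from the surrounding systemd and containerd messages, which use different header formats.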
May 17 00:14:04.873246 kubelet[2287]: I0517 00:14:04.873224 2287 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:14:04.873398 kubelet[2287]: I0517 00:14:04.873387 2287 server.go:1289] "Started kubelet"
May 17 00:14:04.873716 kubelet[2287]: E0517 00:14:04.873684 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.12.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-58e6742ed6&limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 17 00:14:04.877248 kubelet[2287]: E0517 00:14:04.877208 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.12.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 17 00:14:04.877397 kubelet[2287]: I0517 00:14:04.877283 2287 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:14:04.877786 kubelet[2287]: I0517 00:14:04.877752 2287 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:14:04.879539 kubelet[2287]: I0517 00:14:04.879497 2287 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:14:04.883267 kubelet[2287]: E0517 00:14:04.880682 2287 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.99.12.209:6443/api/v1/namespaces/default/events\": dial tcp 91.99.12.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-58e6742ed6.18402830a8af5afc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-58e6742ed6,UID:ci-4081-3-3-n-58e6742ed6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-58e6742ed6,},FirstTimestamp:2025-05-17 00:14:04.873358076 +0000 UTC m=+0.816718814,LastTimestamp:2025-05-17 00:14:04.873358076 +0000 UTC m=+0.816718814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-58e6742ed6,}"
May 17 00:14:04.883526 kubelet[2287]: I0517 00:14:04.883488 2287 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:14:04.884688 kubelet[2287]: I0517 00:14:04.884656 2287 server.go:317] "Adding debug handlers to kubelet server"
May 17 00:14:04.885809 kubelet[2287]: I0517 00:14:04.885780 2287 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:14:04.886897 kubelet[2287]: I0517 00:14:04.886847 2287 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
May 17 00:14:04.892039 kubelet[2287]: E0517 00:14:04.892007 2287 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:14:04.893058 kubelet[2287]: I0517 00:14:04.893040 2287 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:14:04.893451 kubelet[2287]: E0517 00:14:04.893422 2287 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-58e6742ed6\" not found"
May 17 00:14:04.893916 kubelet[2287]: I0517 00:14:04.893894 2287 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:14:04.894704 kubelet[2287]: I0517 00:14:04.894682 2287 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:14:04.895855 kubelet[2287]: I0517 00:14:04.895828 2287 factory.go:223] Registration of the systemd container factory successfully
May 17 00:14:04.896118 kubelet[2287]: I0517 00:14:04.896092 2287 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:14:04.897584 kubelet[2287]: E0517 00:14:04.897534 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.12.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
May 17 00:14:04.897797 kubelet[2287]: E0517 00:14:04.897770 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.12.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-58e6742ed6?timeout=10s\": dial tcp 91.99.12.209:6443: connect: connection refused" interval="200ms"
May 17 00:14:04.899205 kubelet[2287]: I0517 00:14:04.899174 2287 factory.go:223] Registration of the containerd container factory successfully
May 17 00:14:04.922531 kubelet[2287]: I0517 00:14:04.922096 2287 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
May 17 00:14:04.922531 kubelet[2287]: I0517 00:14:04.922140 2287 status_manager.go:230] "Starting to sync pod status with apiserver"
May 17 00:14:04.922531 kubelet[2287]: I0517 00:14:04.922165 2287 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:14:04.922531 kubelet[2287]: I0517 00:14:04.922173 2287 kubelet.go:2436] "Starting kubelet main sync loop"
May 17 00:14:04.922531 kubelet[2287]: E0517 00:14:04.922226 2287 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:14:04.924048 kubelet[2287]: E0517 00:14:04.923878 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.12.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 17 00:14:04.927727 kubelet[2287]: I0517 00:14:04.927685 2287 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:14:04.928344 kubelet[2287]: I0517 00:14:04.928323 2287 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:14:04.928694 kubelet[2287]: I0517 00:14:04.928428 2287 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:14:04.933464 kubelet[2287]: I0517 00:14:04.933407 2287 policy_none.go:49] "None policy: Start"
May 17 00:14:04.933464 kubelet[2287]: I0517 00:14:04.933455 2287 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:14:04.933464 kubelet[2287]: I0517 00:14:04.933477 2287 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:14:04.942655 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
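The `HardEvictionThresholds` in the nodeConfig entry above mix quantity-based signals (`memory.available` < 100Mi) with percentage-based ones (`nodefs.available` < 10% of capacity). The signal names and values come from this log; the evaluation logic below is an illustrative sketch, not the kubelet's actual implementation:

```python
# Sketch: evaluating a hard eviction threshold. A threshold is either an
# absolute quantity (bytes) or a percentage of capacity; it fires when the
# available amount drops below the computed limit (an assumption mirroring
# the "LessThan" operator in the logged config).
def threshold_met(available, capacity, quantity, percentage):
    limit = quantity if quantity is not None else capacity * percentage
    return available < limit

# memory.available < 100Mi: 50Mi free out of 8Gi fires the threshold.
low_mem = threshold_met(50 * 2**20, 8 * 2**30, 100 * 2**20, 0)
# nodefs.available < 10%: 5Gi free out of 100Gi fires the threshold.
low_disk = threshold_met(5 * 2**30, 100 * 2**30, None, 0.1)
```

The "Eviction manager: failed to get summary stats" errors later in this log show these thresholds cannot be evaluated yet, since the node object does not exist on the API server.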
May 17 00:14:04.958884 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 17 00:14:04.964412 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 17 00:14:04.977160 kubelet[2287]: E0517 00:14:04.977078 2287 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
May 17 00:14:04.978405 kubelet[2287]: I0517 00:14:04.978175 2287 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:14:04.978405 kubelet[2287]: I0517 00:14:04.978201 2287 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:14:04.978776 kubelet[2287]: I0517 00:14:04.978745 2287 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:14:04.981452 kubelet[2287]: E0517 00:14:04.981412 2287 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:14:04.981616 kubelet[2287]: E0517 00:14:04.981481 2287 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-58e6742ed6\" not found"
May 17 00:14:05.042273 systemd[1]: Created slice kubepods-burstable-podf69e0941ca6d4cdc22ef91a8a629ef00.slice - libcontainer container kubepods-burstable-podf69e0941ca6d4cdc22ef91a8a629ef00.slice.
May 17 00:14:05.055635 kubelet[2287]: E0517 00:14:05.055422 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.060053 systemd[1]: Created slice kubepods-burstable-poddad06322445562fe9d3e58a1b7ee971e.slice - libcontainer container kubepods-burstable-poddad06322445562fe9d3e58a1b7ee971e.slice.
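The `kubepods-burstable-pod<uid>.slice` units above follow the kubelet's systemd cgroup-driver naming: a per-QoS-class parent slice (`kubepods.slice`, `kubepods-burstable.slice`, `kubepods-besteffort.slice`) plus a per-pod child named after the pod UID. A sketch of that composition (an assumption inferred from the slice names in this log; real pod UIDs containing dashes get additional escaping for systemd, which does not arise here):

```python
# Sketch (assumption): compose the systemd slice name for a pod from its QoS
# class and UID, matching the kubepods-*.slice units created in this log.
def pod_slice(qos, uid):
    base = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{base}-pod{uid}.slice"

name = pod_slice("burstable", "f69e0941ca6d4cdc22ef91a8a629ef00")
```

The three burstable pod slices created here correspond to the kube-apiserver, kube-controller-manager, and kube-scheduler static pods whose volumes are reconciled just below.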
May 17 00:14:05.074600 kubelet[2287]: E0517 00:14:05.074494 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.081870 systemd[1]: Created slice kubepods-burstable-podcb4a47b32bd22862af5d7aae1ccc897b.slice - libcontainer container kubepods-burstable-podcb4a47b32bd22862af5d7aae1ccc897b.slice.
May 17 00:14:05.084630 kubelet[2287]: I0517 00:14:05.082878 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.084630 kubelet[2287]: E0517 00:14:05.083588 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.12.209:6443/api/v1/nodes\": dial tcp 91.99.12.209:6443: connect: connection refused" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.086788 kubelet[2287]: E0517 00:14:05.086746 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096595 kubelet[2287]: I0517 00:14:05.096527 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096595 kubelet[2287]: I0517 00:14:05.096590 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096950 kubelet[2287]: I0517 00:14:05.096618 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096950 kubelet[2287]: I0517 00:14:05.096639 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096950 kubelet[2287]: I0517 00:14:05.096656 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096950 kubelet[2287]: I0517 00:14:05.096738 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.096950 kubelet[2287]: I0517 00:14:05.096762 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.097178 kubelet[2287]: I0517 00:14:05.096783 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.097178 kubelet[2287]: I0517 00:14:05.096801 2287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb4a47b32bd22862af5d7aae1ccc897b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-58e6742ed6\" (UID: \"cb4a47b32bd22862af5d7aae1ccc897b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.098503 kubelet[2287]: E0517 00:14:05.098461 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.12.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-58e6742ed6?timeout=10s\": dial tcp 91.99.12.209:6443: connect: connection refused" interval="400ms"
May 17 00:14:05.286945 kubelet[2287]: I0517 00:14:05.286864 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.287633 kubelet[2287]: E0517 00:14:05.287517 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.12.209:6443/api/v1/nodes\": dial tcp 91.99.12.209:6443: connect: connection refused" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.360524 containerd[1473]: time="2025-05-17T00:14:05.360059178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-58e6742ed6,Uid:f69e0941ca6d4cdc22ef91a8a629ef00,Namespace:kube-system,Attempt:0,}"
May 17 00:14:05.376391 containerd[1473]: time="2025-05-17T00:14:05.376285776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-58e6742ed6,Uid:dad06322445562fe9d3e58a1b7ee971e,Namespace:kube-system,Attempt:0,}"
May 17 00:14:05.388333 containerd[1473]: time="2025-05-17T00:14:05.388027102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-58e6742ed6,Uid:cb4a47b32bd22862af5d7aae1ccc897b,Namespace:kube-system,Attempt:0,}"
May 17 00:14:05.499715 kubelet[2287]: E0517 00:14:05.499636 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.12.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-58e6742ed6?timeout=10s\": dial tcp 91.99.12.209:6443: connect: connection refused" interval="800ms"
May 17 00:14:05.689446 kubelet[2287]: I0517 00:14:05.689335 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.689847 kubelet[2287]: E0517 00:14:05.689723 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.12.209:6443/api/v1/nodes\": dial tcp 91.99.12.209:6443: connect: connection refused" node="ci-4081-3-3-n-58e6742ed6"
May 17 00:14:05.770202 kubelet[2287]: E0517 00:14:05.770080 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.99.12.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-58e6742ed6&limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 17 00:14:05.888973 kubelet[2287]: E0517 00:14:05.888897 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.99.12.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 17 00:14:05.932962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4069123164.mount: Deactivated successfully.
May 17 00:14:05.940623 containerd[1473]: time="2025-05-17T00:14:05.940428985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:14:05.942186 containerd[1473]: time="2025-05-17T00:14:05.942135238Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193"
May 17 00:14:05.944866 containerd[1473]: time="2025-05-17T00:14:05.944742217Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:14:05.946548 containerd[1473]: time="2025-05-17T00:14:05.946379509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:14:05.947094 containerd[1473]: time="2025-05-17T00:14:05.947006314Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:14:05.948794 containerd[1473]: time="2025-05-17T00:14:05.948688806Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:14:05.949424 containerd[1473]: time="2025-05-17T00:14:05.949024608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:14:05.953321 containerd[1473]: time="2025-05-17T00:14:05.953269959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:14:05.954635 containerd[1473]: time="2025-05-17T00:14:05.954588369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.138512ms"
May 17 00:14:05.956052 containerd[1473]: time="2025-05-17T00:14:05.956003059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.865996ms"
May 17 00:14:05.957154 containerd[1473]: time="2025-05-17T00:14:05.957103427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.923209ms"
May 17 00:14:06.042030 kubelet[2287]: E0517 00:14:06.041608 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.99.12.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
May 17 00:14:06.089256 containerd[1473]: time="2025-05-17T00:14:06.088444112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:06.089256 containerd[1473]: time="2025-05-17T00:14:06.088969357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:06.089256 containerd[1473]: time="2025-05-17T00:14:06.089024837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.090794 containerd[1473]: time="2025-05-17T00:14:06.090354768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.093748 containerd[1473]: time="2025-05-17T00:14:06.093613915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:06.094253 containerd[1473]: time="2025-05-17T00:14:06.094125319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:06.094253 containerd[1473]: time="2025-05-17T00:14:06.094224600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.094710 containerd[1473]: time="2025-05-17T00:14:06.094516602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:06.094710 containerd[1473]: time="2025-05-17T00:14:06.094589523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:06.094710 containerd[1473]: time="2025-05-17T00:14:06.094602083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.095303 containerd[1473]: time="2025-05-17T00:14:06.095152048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.095303 containerd[1473]: time="2025-05-17T00:14:06.095084247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:06.120224 systemd[1]: Started cri-containerd-1eb5c40a7767e65c4a6aab49d947e54ad0c77b7e9dbbebfbe23eec1cfeaa3d37.scope - libcontainer container 1eb5c40a7767e65c4a6aab49d947e54ad0c77b7e9dbbebfbe23eec1cfeaa3d37.
May 17 00:14:06.128210 systemd[1]: Started cri-containerd-6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb.scope - libcontainer container 6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb.
May 17 00:14:06.129904 systemd[1]: Started cri-containerd-b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54.scope - libcontainer container b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54.
May 17 00:14:06.189162 containerd[1473]: time="2025-05-17T00:14:06.189115225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-58e6742ed6,Uid:f69e0941ca6d4cdc22ef91a8a629ef00,Namespace:kube-system,Attempt:0,} returns sandbox id \"1eb5c40a7767e65c4a6aab49d947e54ad0c77b7e9dbbebfbe23eec1cfeaa3d37\""
May 17 00:14:06.197253 containerd[1473]: time="2025-05-17T00:14:06.196273284Z" level=info msg="CreateContainer within sandbox \"1eb5c40a7767e65c4a6aab49d947e54ad0c77b7e9dbbebfbe23eec1cfeaa3d37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:14:06.201199 containerd[1473]: time="2025-05-17T00:14:06.201148485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-58e6742ed6,Uid:dad06322445562fe9d3e58a1b7ee971e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb\""
May 17 00:14:06.201351 containerd[1473]: time="2025-05-17T00:14:06.201296206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-58e6742ed6,Uid:cb4a47b32bd22862af5d7aae1ccc897b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54\""
May 17 00:14:06.208170 containerd[1473]: time="2025-05-17T00:14:06.207963941Z" level=info msg="CreateContainer within sandbox \"6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:14:06.211025 containerd[1473]: time="2025-05-17T00:14:06.210897485Z" level=info msg="CreateContainer within sandbox \"b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:14:06.232726 containerd[1473]: time="2025-05-17T00:14:06.232669585Z" level=info msg="CreateContainer within sandbox \"1eb5c40a7767e65c4a6aab49d947e54ad0c77b7e9dbbebfbe23eec1cfeaa3d37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d94b62d15a85581ddaeab9504d1550dd0fba9c911b8fd12b81646490bd14fd4\""
May 17 00:14:06.234108 containerd[1473]: time="2025-05-17T00:14:06.234056677Z" level=info msg="StartContainer for \"7d94b62d15a85581ddaeab9504d1550dd0fba9c911b8fd12b81646490bd14fd4\""
May 17 00:14:06.238748 containerd[1473]: time="2025-05-17T00:14:06.238611675Z" level=info msg="CreateContainer within sandbox \"6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247\""
May 17 00:14:06.239020 containerd[1473]: time="2025-05-17T00:14:06.238885277Z" level=info msg="CreateContainer within sandbox \"b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395\""
May 17 00:14:06.239681 containerd[1473]: time="2025-05-17T00:14:06.239640923Z" level=info msg="StartContainer for \"5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247\""
May 17 00:14:06.242007 containerd[1473]: time="2025-05-17T00:14:06.240411609Z" level=info msg="StartContainer for \"a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395\""
May 17 00:14:06.281220 systemd[1]: Started cri-containerd-7d94b62d15a85581ddaeab9504d1550dd0fba9c911b8fd12b81646490bd14fd4.scope - libcontainer container 7d94b62d15a85581ddaeab9504d1550dd0fba9c911b8fd12b81646490bd14fd4.
May 17 00:14:06.289218 systemd[1]: Started cri-containerd-5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247.scope - libcontainer container 5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247.
May 17 00:14:06.301376 kubelet[2287]: E0517 00:14:06.301225 2287 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.99.12.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-58e6742ed6?timeout=10s\": dial tcp 91.99.12.209:6443: connect: connection refused" interval="1.6s" May 17 00:14:06.305305 systemd[1]: Started cri-containerd-a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395.scope - libcontainer container a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395. May 17 00:14:06.348482 containerd[1473]: time="2025-05-17T00:14:06.348068500Z" level=info msg="StartContainer for \"7d94b62d15a85581ddaeab9504d1550dd0fba9c911b8fd12b81646490bd14fd4\" returns successfully" May 17 00:14:06.376136 containerd[1473]: time="2025-05-17T00:14:06.375882730Z" level=info msg="StartContainer for \"5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247\" returns successfully" May 17 00:14:06.380232 kubelet[2287]: E0517 00:14:06.379503 2287 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.99.12.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.99.12.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 17 00:14:06.383920 containerd[1473]: time="2025-05-17T00:14:06.383867516Z" level=info msg="StartContainer for \"a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395\" returns successfully" May 17 00:14:06.493783 kubelet[2287]: I0517 00:14:06.493686 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:06.496062 kubelet[2287]: E0517 00:14:06.495400 2287 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.99.12.209:6443/api/v1/nodes\": dial tcp 91.99.12.209:6443: connect: connection refused" node="ci-4081-3-3-n-58e6742ed6" May 
17 00:14:06.937202 kubelet[2287]: E0517 00:14:06.935915 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:06.940967 kubelet[2287]: E0517 00:14:06.939827 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:06.948363 kubelet[2287]: E0517 00:14:06.948159 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:07.950395 kubelet[2287]: E0517 00:14:07.949801 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:07.950395 kubelet[2287]: E0517 00:14:07.950186 2287 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:08.100031 kubelet[2287]: I0517 00:14:08.099224 2287 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:08.879680 kubelet[2287]: I0517 00:14:08.879288 2287 apiserver.go:52] "Watching apiserver" May 17 00:14:08.896430 kubelet[2287]: I0517 00:14:08.896119 2287 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:08.896430 kubelet[2287]: E0517 00:14:08.896381 2287 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-58e6742ed6\" not found" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:08.903027 kubelet[2287]: E0517 00:14:08.902767 2287 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-58e6742ed6.18402830a8af5afc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-58e6742ed6,UID:ci-4081-3-3-n-58e6742ed6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-58e6742ed6,},FirstTimestamp:2025-05-17 00:14:04.873358076 +0000 UTC m=+0.816718814,LastTimestamp:2025-05-17 00:14:04.873358076 +0000 UTC m=+0.816718814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-58e6742ed6,}" May 17 00:14:08.958534 kubelet[2287]: I0517 00:14:08.958391 2287 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:08.963125 kubelet[2287]: E0517 00:14:08.962159 2287 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-58e6742ed6.18402830a9cb31ca default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-58e6742ed6,UID:ci-4081-3-3-n-58e6742ed6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-58e6742ed6,},FirstTimestamp:2025-05-17 00:14:04.891959754 +0000 UTC m=+0.835320452,LastTimestamp:2025-05-17 00:14:04.891959754 +0000 UTC m=+0.835320452,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-58e6742ed6,}" May 17 00:14:08.993621 kubelet[2287]: I0517 00:14:08.993565 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:09.049179 kubelet[2287]: 
E0517 00:14:09.048646 2287 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-3-n-58e6742ed6.18402830abdc97f6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-58e6742ed6,UID:ci-4081-3-3-n-58e6742ed6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-3-n-58e6742ed6 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-58e6742ed6,},FirstTimestamp:2025-05-17 00:14:04.926654454 +0000 UTC m=+0.870015152,LastTimestamp:2025-05-17 00:14:04.926654454 +0000 UTC m=+0.870015152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-58e6742ed6,}" May 17 00:14:09.067841 kubelet[2287]: E0517 00:14:09.067601 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-58e6742ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:09.067841 kubelet[2287]: I0517 00:14:09.067643 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:09.074058 kubelet[2287]: E0517 00:14:09.073743 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:09.074058 kubelet[2287]: I0517 00:14:09.073780 2287 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:09.081054 kubelet[2287]: E0517 00:14:09.080808 2287 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:11.572460 systemd[1]: Reloading requested from client PID 2571 ('systemctl') (unit session-7.scope)... May 17 00:14:11.572479 systemd[1]: Reloading... May 17 00:14:11.683020 zram_generator::config[2610]: No configuration found. May 17 00:14:11.803864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:14:11.914548 systemd[1]: Reloading finished in 341 ms. May 17 00:14:11.968578 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:11.986282 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:14:11.986572 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:11.986636 systemd[1]: kubelet.service: Consumed 1.312s CPU time, 130.3M memory peak, 0B memory swap peak. May 17 00:14:11.995778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:14:12.126513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:14:12.140694 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:14:12.198710 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:12.198710 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 17 00:14:12.198710 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:14:12.198710 kubelet[2656]: I0517 00:14:12.198416 2656 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:14:12.205477 kubelet[2656]: I0517 00:14:12.205303 2656 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 17 00:14:12.205477 kubelet[2656]: I0517 00:14:12.205347 2656 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:14:12.205784 kubelet[2656]: I0517 00:14:12.205696 2656 server.go:956] "Client rotation is on, will bootstrap in background" May 17 00:14:12.207945 kubelet[2656]: I0517 00:14:12.207253 2656 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 17 00:14:12.218855 kubelet[2656]: I0517 00:14:12.218809 2656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:14:12.225343 kubelet[2656]: E0517 00:14:12.225166 2656 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:14:12.227485 kubelet[2656]: I0517 00:14:12.225905 2656 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:14:12.228995 kubelet[2656]: I0517 00:14:12.228924 2656 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:14:12.229267 kubelet[2656]: I0517 00:14:12.229191 2656 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:14:12.229450 kubelet[2656]: I0517 00:14:12.229218 2656 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-58e6742ed6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:14:12.229450 kubelet[2656]: I0517 00:14:12.229402 2656 topology_manager.go:138] "Creating topology manager with none policy" May 17 
00:14:12.229450 kubelet[2656]: I0517 00:14:12.229412 2656 container_manager_linux.go:303] "Creating device plugin manager" May 17 00:14:12.229645 kubelet[2656]: I0517 00:14:12.229456 2656 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:12.232747 kubelet[2656]: I0517 00:14:12.229686 2656 kubelet.go:480] "Attempting to sync node with API server" May 17 00:14:12.232747 kubelet[2656]: I0517 00:14:12.229705 2656 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:14:12.232747 kubelet[2656]: I0517 00:14:12.229734 2656 kubelet.go:386] "Adding apiserver pod source" May 17 00:14:12.232747 kubelet[2656]: I0517 00:14:12.229750 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:14:12.234045 kubelet[2656]: I0517 00:14:12.233637 2656 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:14:12.235993 kubelet[2656]: I0517 00:14:12.235301 2656 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 17 00:14:12.241296 kubelet[2656]: I0517 00:14:12.241261 2656 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 17 00:14:12.241418 kubelet[2656]: I0517 00:14:12.241312 2656 server.go:1289] "Started kubelet" May 17 00:14:12.243723 kubelet[2656]: I0517 00:14:12.243683 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:14:12.255995 kubelet[2656]: I0517 00:14:12.254739 2656 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:14:12.255995 kubelet[2656]: I0517 00:14:12.255831 2656 server.go:317] "Adding debug handlers to kubelet server" May 17 00:14:12.262050 kubelet[2656]: I0517 00:14:12.261909 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:14:12.262343 kubelet[2656]: I0517 00:14:12.262319 2656 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:14:12.262666 kubelet[2656]: I0517 00:14:12.262630 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:14:12.269022 kubelet[2656]: I0517 00:14:12.267364 2656 volume_manager.go:297] "Starting Kubelet Volume Manager" May 17 00:14:12.272122 kubelet[2656]: I0517 00:14:12.271692 2656 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 17 00:14:12.272122 kubelet[2656]: I0517 00:14:12.271937 2656 reconciler.go:26] "Reconciler: start to sync state" May 17 00:14:12.273079 kubelet[2656]: I0517 00:14:12.273041 2656 factory.go:223] Registration of the systemd container factory successfully May 17 00:14:12.273962 kubelet[2656]: I0517 00:14:12.273391 2656 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:14:12.288100 kubelet[2656]: I0517 00:14:12.288043 2656 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 17 00:14:12.288744 kubelet[2656]: I0517 00:14:12.288473 2656 factory.go:223] Registration of the containerd container factory successfully May 17 00:14:12.289673 kubelet[2656]: I0517 00:14:12.289630 2656 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 17 00:14:12.289673 kubelet[2656]: I0517 00:14:12.289664 2656 status_manager.go:230] "Starting to sync pod status with apiserver" May 17 00:14:12.289792 kubelet[2656]: I0517 00:14:12.289692 2656 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 17 00:14:12.289792 kubelet[2656]: I0517 00:14:12.289700 2656 kubelet.go:2436] "Starting kubelet main sync loop" May 17 00:14:12.289792 kubelet[2656]: E0517 00:14:12.289745 2656 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:14:12.347908 kubelet[2656]: I0517 00:14:12.347879 2656 cpu_manager.go:221] "Starting CPU manager" policy="none" May 17 00:14:12.348206 kubelet[2656]: I0517 00:14:12.348193 2656 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 17 00:14:12.348335 kubelet[2656]: I0517 00:14:12.348324 2656 state_mem.go:36] "Initialized new in-memory state store" May 17 00:14:12.348635 kubelet[2656]: I0517 00:14:12.348609 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:14:12.348955 kubelet[2656]: I0517 00:14:12.348726 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:14:12.348955 kubelet[2656]: I0517 00:14:12.348759 2656 policy_none.go:49] "None policy: Start" May 17 00:14:12.348955 kubelet[2656]: I0517 00:14:12.348770 2656 memory_manager.go:186] "Starting memorymanager" policy="None" May 17 00:14:12.348955 kubelet[2656]: I0517 00:14:12.348782 2656 state_mem.go:35] "Initializing new in-memory state store" May 17 00:14:12.348955 kubelet[2656]: I0517 00:14:12.348881 2656 state_mem.go:75] "Updated machine memory state" May 17 00:14:12.358212 kubelet[2656]: E0517 00:14:12.358183 2656 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 17 00:14:12.359394 kubelet[2656]: I0517 00:14:12.359035 2656 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:14:12.359394 kubelet[2656]: I0517 00:14:12.359055 2656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:14:12.359394 kubelet[2656]: I0517 00:14:12.359343 2656 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:14:12.362011 kubelet[2656]: E0517 00:14:12.361397 2656 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 17 00:14:12.391160 kubelet[2656]: I0517 00:14:12.391113 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.392746 kubelet[2656]: I0517 00:14:12.391622 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.393359 kubelet[2656]: I0517 00:14:12.393320 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.471007 kubelet[2656]: I0517 00:14:12.469938 2656 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473157 kubelet[2656]: I0517 00:14:12.472292 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473157 kubelet[2656]: I0517 00:14:12.472330 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473157 kubelet[2656]: I0517 00:14:12.472353 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473157 kubelet[2656]: I0517 00:14:12.472370 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473157 kubelet[2656]: I0517 00:14:12.472387 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473376 kubelet[2656]: I0517 00:14:12.472403 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb4a47b32bd22862af5d7aae1ccc897b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-58e6742ed6\" (UID: \"cb4a47b32bd22862af5d7aae1ccc897b\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473376 kubelet[2656]: I0517 00:14:12.472420 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f69e0941ca6d4cdc22ef91a8a629ef00-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" (UID: \"f69e0941ca6d4cdc22ef91a8a629ef00\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473376 kubelet[2656]: 
I0517 00:14:12.472436 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.473376 kubelet[2656]: I0517 00:14:12.472460 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dad06322445562fe9d3e58a1b7ee971e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-58e6742ed6\" (UID: \"dad06322445562fe9d3e58a1b7ee971e\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.488516 kubelet[2656]: I0517 00:14:12.488109 2656 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.488516 kubelet[2656]: I0517 00:14:12.488211 2656 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-58e6742ed6" May 17 00:14:12.564388 sudo[2691]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 17 00:14:12.564901 sudo[2691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 17 00:14:13.044467 sudo[2691]: pam_unix(sudo:session): session closed for user root May 17 00:14:13.230784 kubelet[2656]: I0517 00:14:13.230730 2656 apiserver.go:52] "Watching apiserver" May 17 00:14:13.274605 kubelet[2656]: I0517 00:14:13.272368 2656 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:14:13.320557 kubelet[2656]: I0517 00:14:13.319244 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:13.320557 kubelet[2656]: I0517 00:14:13.319526 2656 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:13.345613 kubelet[2656]: E0517 00:14:13.345191 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-58e6742ed6\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" May 17 00:14:13.346627 kubelet[2656]: E0517 00:14:13.346105 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-58e6742ed6\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" May 17 00:14:13.385169 kubelet[2656]: I0517 00:14:13.385062 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-58e6742ed6" podStartSLOduration=1.38503453 podStartE2EDuration="1.38503453s" podCreationTimestamp="2025-05-17 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:13.364667402 +0000 UTC m=+1.218741934" watchObservedRunningTime="2025-05-17 00:14:13.38503453 +0000 UTC m=+1.239109062" May 17 00:14:13.388145 kubelet[2656]: I0517 00:14:13.385524 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-58e6742ed6" podStartSLOduration=1.385510577 podStartE2EDuration="1.385510577s" podCreationTimestamp="2025-05-17 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:13.38079571 +0000 UTC m=+1.234870242" watchObservedRunningTime="2025-05-17 00:14:13.385510577 +0000 UTC m=+1.239585109" May 17 00:14:13.415511 kubelet[2656]: I0517 00:14:13.415356 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-58e6742ed6" podStartSLOduration=1.4153396 
podStartE2EDuration="1.4153396s" podCreationTimestamp="2025-05-17 00:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:13.399441655 +0000 UTC m=+1.253516147" watchObservedRunningTime="2025-05-17 00:14:13.4153396 +0000 UTC m=+1.269414092" May 17 00:14:15.685934 sudo[1835]: pam_unix(sudo:session): session closed for user root May 17 00:14:15.847836 sshd[1832]: pam_unix(sshd:session): session closed for user core May 17 00:14:15.854532 systemd[1]: sshd@6-91.99.12.209:22-139.178.68.195:45044.service: Deactivated successfully. May 17 00:14:15.854731 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. May 17 00:14:15.860621 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:14:15.862119 systemd[1]: session-7.scope: Consumed 8.631s CPU time, 155.5M memory peak, 0B memory swap peak. May 17 00:14:15.864426 systemd-logind[1458]: Removed session 7. May 17 00:14:16.580857 kubelet[2656]: I0517 00:14:16.580823 2656 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:14:16.581998 containerd[1473]: time="2025-05-17T00:14:16.581771543Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:14:16.583360 kubelet[2656]: I0517 00:14:16.582195 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:14:17.572253 systemd[1]: Created slice kubepods-besteffort-pod25a1029a_c76c_4cd7_8e85_84a11ebffafc.slice - libcontainer container kubepods-besteffort-pod25a1029a_c76c_4cd7_8e85_84a11ebffafc.slice. May 17 00:14:17.597841 systemd[1]: Created slice kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice - libcontainer container kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice. 
May 17 00:14:17.601887 kubelet[2656]: I0517 00:14:17.601838 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-lib-modules\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602396 kubelet[2656]: I0517 00:14:17.602012 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-kernel\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602396 kubelet[2656]: I0517 00:14:17.602051 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-bpf-maps\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602396 kubelet[2656]: I0517 00:14:17.602068 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-etc-cni-netd\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602396 kubelet[2656]: I0517 00:14:17.602083 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-net\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602396 kubelet[2656]: I0517 00:14:17.602101 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-hubble-tls\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602796 kubelet[2656]: I0517 00:14:17.602756 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-hostproc\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602861 kubelet[2656]: I0517 00:14:17.602832 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cni-path\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602861 kubelet[2656]: I0517 00:14:17.602850 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8905af95-e457-422d-8261-d744722d97d0-cilium-config-path\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602930 kubelet[2656]: I0517 00:14:17.602909 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25a1029a-c76c-4cd7-8e85-84a11ebffafc-kube-proxy\") pod \"kube-proxy-t8ckj\" (UID: \"25a1029a-c76c-4cd7-8e85-84a11ebffafc\") " pod="kube-system/kube-proxy-t8ckj"
May 17 00:14:17.602965 kubelet[2656]: I0517 00:14:17.602931 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-cgroup\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.602965 kubelet[2656]: I0517 00:14:17.602948 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-xtables-lock\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.603047 kubelet[2656]: I0517 00:14:17.603026 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8905af95-e457-422d-8261-d744722d97d0-clustermesh-secrets\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.603092 kubelet[2656]: I0517 00:14:17.603050 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68np\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-kube-api-access-g68np\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.603120 kubelet[2656]: I0517 00:14:17.603103 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25a1029a-c76c-4cd7-8e85-84a11ebffafc-xtables-lock\") pod \"kube-proxy-t8ckj\" (UID: \"25a1029a-c76c-4cd7-8e85-84a11ebffafc\") " pod="kube-system/kube-proxy-t8ckj"
May 17 00:14:17.603434 kubelet[2656]: I0517 00:14:17.603119 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25a1029a-c76c-4cd7-8e85-84a11ebffafc-lib-modules\") pod \"kube-proxy-t8ckj\" (UID: \"25a1029a-c76c-4cd7-8e85-84a11ebffafc\") " pod="kube-system/kube-proxy-t8ckj"
May 17 00:14:17.603434 kubelet[2656]: I0517 00:14:17.603161 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mcf\" (UniqueName: \"kubernetes.io/projected/25a1029a-c76c-4cd7-8e85-84a11ebffafc-kube-api-access-g8mcf\") pod \"kube-proxy-t8ckj\" (UID: \"25a1029a-c76c-4cd7-8e85-84a11ebffafc\") " pod="kube-system/kube-proxy-t8ckj"
May 17 00:14:17.603434 kubelet[2656]: I0517 00:14:17.603178 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-run\") pod \"cilium-lrlkk\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " pod="kube-system/cilium-lrlkk"
May 17 00:14:17.815021 systemd[1]: Created slice kubepods-besteffort-pod2079f51c_d281_4f28_849b_b7e64c8f6c66.slice - libcontainer container kubepods-besteffort-pod2079f51c_d281_4f28_849b_b7e64c8f6c66.slice.
May 17 00:14:17.885201 containerd[1473]: time="2025-05-17T00:14:17.884399554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8ckj,Uid:25a1029a-c76c-4cd7-8e85-84a11ebffafc,Namespace:kube-system,Attempt:0,}"
May 17 00:14:17.903667 containerd[1473]: time="2025-05-17T00:14:17.903239754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrlkk,Uid:8905af95-e457-422d-8261-d744722d97d0,Namespace:kube-system,Attempt:0,}"
May 17 00:14:17.908438 kubelet[2656]: I0517 00:14:17.908376 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr8b2\" (UniqueName: \"kubernetes.io/projected/2079f51c-d281-4f28-849b-b7e64c8f6c66-kube-api-access-dr8b2\") pod \"cilium-operator-6c4d7847fc-5tglz\" (UID: \"2079f51c-d281-4f28-849b-b7e64c8f6c66\") " pod="kube-system/cilium-operator-6c4d7847fc-5tglz"
May 17 00:14:17.908615 kubelet[2656]: I0517 00:14:17.908441 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2079f51c-d281-4f28-849b-b7e64c8f6c66-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5tglz\" (UID: \"2079f51c-d281-4f28-849b-b7e64c8f6c66\") " pod="kube-system/cilium-operator-6c4d7847fc-5tglz"
May 17 00:14:17.920852 containerd[1473]: time="2025-05-17T00:14:17.920008519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:17.920852 containerd[1473]: time="2025-05-17T00:14:17.920081600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:17.920852 containerd[1473]: time="2025-05-17T00:14:17.920116361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:17.920852 containerd[1473]: time="2025-05-17T00:14:17.920215642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:17.942334 containerd[1473]: time="2025-05-17T00:14:17.942068054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:17.942718 containerd[1473]: time="2025-05-17T00:14:17.942266177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:17.942718 containerd[1473]: time="2025-05-17T00:14:17.942574662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:17.943184 containerd[1473]: time="2025-05-17T00:14:17.943010590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:17.946728 systemd[1]: Started cri-containerd-ad688e09f30db4fd2d31f48289102e8652f03f180290ce00f530d6c6585fe392.scope - libcontainer container ad688e09f30db4fd2d31f48289102e8652f03f180290ce00f530d6c6585fe392.
May 17 00:14:17.973226 systemd[1]: Started cri-containerd-9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6.scope - libcontainer container 9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6.
May 17 00:14:17.984394 containerd[1473]: time="2025-05-17T00:14:17.984271571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t8ckj,Uid:25a1029a-c76c-4cd7-8e85-84a11ebffafc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad688e09f30db4fd2d31f48289102e8652f03f180290ce00f530d6c6585fe392\""
May 17 00:14:17.995730 containerd[1473]: time="2025-05-17T00:14:17.995502722Z" level=info msg="CreateContainer within sandbox \"ad688e09f30db4fd2d31f48289102e8652f03f180290ce00f530d6c6585fe392\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:14:18.008808 containerd[1473]: time="2025-05-17T00:14:18.008588629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lrlkk,Uid:8905af95-e457-422d-8261-d744722d97d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\""
May 17 00:14:18.013513 containerd[1473]: time="2025-05-17T00:14:18.013339953Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:14:18.029209 containerd[1473]: time="2025-05-17T00:14:18.028971749Z" level=info msg="CreateContainer within sandbox \"ad688e09f30db4fd2d31f48289102e8652f03f180290ce00f530d6c6585fe392\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3024bf9e01622c216f4b0e452b00720d673b9948745a02e685d696e9d641640e\""
May 17 00:14:18.033896 containerd[1473]: time="2025-05-17T00:14:18.032958139Z" level=info msg="StartContainer for \"3024bf9e01622c216f4b0e452b00720d673b9948745a02e685d696e9d641640e\""
May 17 00:14:18.074292 systemd[1]: Started cri-containerd-3024bf9e01622c216f4b0e452b00720d673b9948745a02e685d696e9d641640e.scope - libcontainer container 3024bf9e01622c216f4b0e452b00720d673b9948745a02e685d696e9d641640e.
May 17 00:14:18.107526 containerd[1473]: time="2025-05-17T00:14:18.107438333Z" level=info msg="StartContainer for \"3024bf9e01622c216f4b0e452b00720d673b9948745a02e685d696e9d641640e\" returns successfully"
May 17 00:14:18.140556 containerd[1473]: time="2025-05-17T00:14:18.140413035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5tglz,Uid:2079f51c-d281-4f28-849b-b7e64c8f6c66,Namespace:kube-system,Attempt:0,}"
May 17 00:14:18.176190 containerd[1473]: time="2025-05-17T00:14:18.175612656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:18.176190 containerd[1473]: time="2025-05-17T00:14:18.175842620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:18.176190 containerd[1473]: time="2025-05-17T00:14:18.175870260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:18.176537 containerd[1473]: time="2025-05-17T00:14:18.176213986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:18.195226 systemd[1]: Started cri-containerd-a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f.scope - libcontainer container a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f.
May 17 00:14:18.257880 containerd[1473]: time="2025-05-17T00:14:18.257528181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5tglz,Uid:2079f51c-d281-4f28-849b-b7e64c8f6c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\""
May 17 00:14:19.495310 kubelet[2656]: I0517 00:14:19.495165 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t8ckj" podStartSLOduration=2.495147608 podStartE2EDuration="2.495147608s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:18.350917309 +0000 UTC m=+6.204991801" watchObservedRunningTime="2025-05-17 00:14:19.495147608 +0000 UTC m=+7.349222100"
May 17 00:14:21.599584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212372614.mount: Deactivated successfully.
May 17 00:14:23.023415 containerd[1473]: time="2025-05-17T00:14:23.023331851Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:23.025160 containerd[1473]: time="2025-05-17T00:14:23.024649798Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 17 00:14:23.027377 containerd[1473]: time="2025-05-17T00:14:23.027225611Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:23.029539 containerd[1473]: time="2025-05-17T00:14:23.029380455Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.015985421s"
May 17 00:14:23.029539 containerd[1473]: time="2025-05-17T00:14:23.029426416Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 17 00:14:23.031284 containerd[1473]: time="2025-05-17T00:14:23.031039330Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:14:23.036661 containerd[1473]: time="2025-05-17T00:14:23.035625664Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:14:23.055252 containerd[1473]: time="2025-05-17T00:14:23.055160867Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\""
May 17 00:14:23.059602 containerd[1473]: time="2025-05-17T00:14:23.058317892Z" level=info msg="StartContainer for \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\""
May 17 00:14:23.102452 systemd[1]: Started cri-containerd-2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1.scope - libcontainer container 2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1.
May 17 00:14:23.138285 containerd[1473]: time="2025-05-17T00:14:23.138237539Z" level=info msg="StartContainer for \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\" returns successfully"
May 17 00:14:23.151061 systemd[1]: cri-containerd-2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1.scope: Deactivated successfully.
May 17 00:14:23.326929 containerd[1473]: time="2025-05-17T00:14:23.326806745Z" level=info msg="shim disconnected" id=2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1 namespace=k8s.io
May 17 00:14:23.327519 containerd[1473]: time="2025-05-17T00:14:23.326904667Z" level=warning msg="cleaning up after shim disconnected" id=2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1 namespace=k8s.io
May 17 00:14:23.327519 containerd[1473]: time="2025-05-17T00:14:23.327281515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:23.361863 containerd[1473]: time="2025-05-17T00:14:23.361552421Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:14:23.377260 containerd[1473]: time="2025-05-17T00:14:23.377170903Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\""
May 17 00:14:23.381705 containerd[1473]: time="2025-05-17T00:14:23.378073482Z" level=info msg="StartContainer for \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\""
May 17 00:14:23.414254 systemd[1]: Started cri-containerd-83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0.scope - libcontainer container 83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0.
May 17 00:14:23.444761 containerd[1473]: time="2025-05-17T00:14:23.444701575Z" level=info msg="StartContainer for \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\" returns successfully"
May 17 00:14:23.458956 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:14:23.459206 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:14:23.459284 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 17 00:14:23.466798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:14:23.467038 systemd[1]: cri-containerd-83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0.scope: Deactivated successfully.
May 17 00:14:23.495152 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:14:23.504417 containerd[1473]: time="2025-05-17T00:14:23.504342324Z" level=info msg="shim disconnected" id=83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0 namespace=k8s.io
May 17 00:14:23.504417 containerd[1473]: time="2025-05-17T00:14:23.504412405Z" level=warning msg="cleaning up after shim disconnected" id=83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0 namespace=k8s.io
May 17 00:14:23.504417 containerd[1473]: time="2025-05-17T00:14:23.504423365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:24.050719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1-rootfs.mount: Deactivated successfully.
May 17 00:14:24.370028 containerd[1473]: time="2025-05-17T00:14:24.368535970Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:14:24.391640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840350374.mount: Deactivated successfully.
May 17 00:14:24.395810 containerd[1473]: time="2025-05-17T00:14:24.395756305Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\""
May 17 00:14:24.398396 containerd[1473]: time="2025-05-17T00:14:24.398352800Z" level=info msg="StartContainer for \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\""
May 17 00:14:24.434233 systemd[1]: Started cri-containerd-a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15.scope - libcontainer container a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15.
May 17 00:14:24.475526 containerd[1473]: time="2025-05-17T00:14:24.475462971Z" level=info msg="StartContainer for \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\" returns successfully"
May 17 00:14:24.484124 systemd[1]: cri-containerd-a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15.scope: Deactivated successfully.
May 17 00:14:24.524992 containerd[1473]: time="2025-05-17T00:14:24.524856355Z" level=info msg="shim disconnected" id=a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15 namespace=k8s.io
May 17 00:14:24.524992 containerd[1473]: time="2025-05-17T00:14:24.524923277Z" level=warning msg="cleaning up after shim disconnected" id=a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15 namespace=k8s.io
May 17 00:14:24.524992 containerd[1473]: time="2025-05-17T00:14:24.524936157Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:25.049893 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15-rootfs.mount: Deactivated successfully.
May 17 00:14:25.375223 containerd[1473]: time="2025-05-17T00:14:25.375134652Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:14:25.405317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount854672678.mount: Deactivated successfully.
May 17 00:14:25.420010 containerd[1473]: time="2025-05-17T00:14:25.419000363Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\""
May 17 00:14:25.421187 containerd[1473]: time="2025-05-17T00:14:25.421135849Z" level=info msg="StartContainer for \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\""
May 17 00:14:25.483257 systemd[1]: Started cri-containerd-60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77.scope - libcontainer container 60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77.
May 17 00:14:25.539287 systemd[1]: cri-containerd-60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77.scope: Deactivated successfully.
May 17 00:14:25.545175 containerd[1473]: time="2025-05-17T00:14:25.544127754Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice/cri-containerd-60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77.scope/memory.events\": no such file or directory"
May 17 00:14:25.547316 containerd[1473]: time="2025-05-17T00:14:25.547037177Z" level=info msg="StartContainer for \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\" returns successfully"
May 17 00:14:25.560547 containerd[1473]: time="2025-05-17T00:14:25.559652811Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:25.563024 containerd[1473]: time="2025-05-17T00:14:25.562811799Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 17 00:14:25.565290 containerd[1473]: time="2025-05-17T00:14:25.565226411Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:14:25.568557 containerd[1473]: time="2025-05-17T00:14:25.568510443Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.53732375s"
May 17 00:14:25.572155 containerd[1473]: time="2025-05-17T00:14:25.568774888Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 17 00:14:25.614620 containerd[1473]: time="2025-05-17T00:14:25.614573641Z" level=info msg="CreateContainer within sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:14:25.624051 containerd[1473]: time="2025-05-17T00:14:25.623730719Z" level=info msg="shim disconnected" id=60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77 namespace=k8s.io
May 17 00:14:25.624051 containerd[1473]: time="2025-05-17T00:14:25.623898843Z" level=warning msg="cleaning up after shim disconnected" id=60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77 namespace=k8s.io
May 17 00:14:25.624051 containerd[1473]: time="2025-05-17T00:14:25.623908683Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:14:25.635264 containerd[1473]: time="2025-05-17T00:14:25.634910001Z" level=info msg="CreateContainer within sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\""
May 17 00:14:25.637183 containerd[1473]: time="2025-05-17T00:14:25.636324912Z" level=info msg="StartContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\""
May 17 00:14:25.669248 systemd[1]: Started cri-containerd-d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946.scope - libcontainer container d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946.
May 17 00:14:25.697791 containerd[1473]: time="2025-05-17T00:14:25.697739563Z" level=info msg="StartContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" returns successfully"
May 17 00:14:26.382369 containerd[1473]: time="2025-05-17T00:14:26.382319070Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:14:26.402221 containerd[1473]: time="2025-05-17T00:14:26.402055188Z" level=info msg="CreateContainer within sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\""
May 17 00:14:26.402734 containerd[1473]: time="2025-05-17T00:14:26.402699322Z" level=info msg="StartContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\""
May 17 00:14:26.461237 systemd[1]: Started cri-containerd-039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28.scope - libcontainer container 039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28.
May 17 00:14:26.497526 kubelet[2656]: I0517 00:14:26.497409 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5tglz" podStartSLOduration=2.187148645 podStartE2EDuration="9.497388822s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="2025-05-17 00:14:18.260456473 +0000 UTC m=+6.114530965" lastFinishedPulling="2025-05-17 00:14:25.57069665 +0000 UTC m=+13.424771142" observedRunningTime="2025-05-17 00:14:26.427460271 +0000 UTC m=+14.281534763" watchObservedRunningTime="2025-05-17 00:14:26.497388822 +0000 UTC m=+14.351463314"
May 17 00:14:26.532272 containerd[1473]: time="2025-05-17T00:14:26.532219914Z" level=info msg="StartContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" returns successfully"
May 17 00:14:26.754460 kubelet[2656]: I0517 00:14:26.753664 2656 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:14:26.805455 systemd[1]: Created slice kubepods-burstable-pod4d3d7230_33d9_4362_ae96_e802860dfd09.slice - libcontainer container kubepods-burstable-pod4d3d7230_33d9_4362_ae96_e802860dfd09.slice.
May 17 00:14:26.822888 systemd[1]: Created slice kubepods-burstable-pod5f90d0f9_48aa_460e_ab4a_4b6009861b8a.slice - libcontainer container kubepods-burstable-pod5f90d0f9_48aa_460e_ab4a_4b6009861b8a.slice.
May 17 00:14:26.875430 kubelet[2656]: I0517 00:14:26.875286 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d3d7230-33d9-4362-ae96-e802860dfd09-config-volume\") pod \"coredns-674b8bbfcf-wgjcd\" (UID: \"4d3d7230-33d9-4362-ae96-e802860dfd09\") " pod="kube-system/coredns-674b8bbfcf-wgjcd"
May 17 00:14:26.875430 kubelet[2656]: I0517 00:14:26.875361 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtbs\" (UniqueName: \"kubernetes.io/projected/4d3d7230-33d9-4362-ae96-e802860dfd09-kube-api-access-brtbs\") pod \"coredns-674b8bbfcf-wgjcd\" (UID: \"4d3d7230-33d9-4362-ae96-e802860dfd09\") " pod="kube-system/coredns-674b8bbfcf-wgjcd"
May 17 00:14:26.875763 kubelet[2656]: I0517 00:14:26.875390 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdwd\" (UniqueName: \"kubernetes.io/projected/5f90d0f9-48aa-460e-ab4a-4b6009861b8a-kube-api-access-dvdwd\") pod \"coredns-674b8bbfcf-qc792\" (UID: \"5f90d0f9-48aa-460e-ab4a-4b6009861b8a\") " pod="kube-system/coredns-674b8bbfcf-qc792"
May 17 00:14:26.875763 kubelet[2656]: I0517 00:14:26.875606 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f90d0f9-48aa-460e-ab4a-4b6009861b8a-config-volume\") pod \"coredns-674b8bbfcf-qc792\" (UID: \"5f90d0f9-48aa-460e-ab4a-4b6009861b8a\") " pod="kube-system/coredns-674b8bbfcf-qc792"
May 17 00:14:27.117911 containerd[1473]: time="2025-05-17T00:14:27.117222944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgjcd,Uid:4d3d7230-33d9-4362-ae96-e802860dfd09,Namespace:kube-system,Attempt:0,}"
May 17 00:14:27.133850 containerd[1473]: time="2025-05-17T00:14:27.133545434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qc792,Uid:5f90d0f9-48aa-460e-ab4a-4b6009861b8a,Namespace:kube-system,Attempt:0,}"
May 17 00:14:27.412648 kubelet[2656]: I0517 00:14:27.412102 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lrlkk" podStartSLOduration=5.3937985600000005 podStartE2EDuration="10.412084307s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="2025-05-17 00:14:18.012579499 +0000 UTC m=+5.866653991" lastFinishedPulling="2025-05-17 00:14:23.030865206 +0000 UTC m=+10.884939738" observedRunningTime="2025-05-17 00:14:27.411951664 +0000 UTC m=+15.266026196" watchObservedRunningTime="2025-05-17 00:14:27.412084307 +0000 UTC m=+15.266158799"
May 17 00:14:29.666592 systemd-networkd[1368]: cilium_host: Link UP
May 17 00:14:29.669791 systemd-networkd[1368]: cilium_net: Link UP
May 17 00:14:29.671617 systemd-networkd[1368]: cilium_net: Gained carrier
May 17 00:14:29.673003 systemd-networkd[1368]: cilium_host: Gained carrier
May 17 00:14:29.799130 systemd-networkd[1368]: cilium_vxlan: Link UP
May 17 00:14:29.800140 systemd-networkd[1368]: cilium_vxlan: Gained carrier
May 17 00:14:29.984192 systemd-networkd[1368]: cilium_host: Gained IPv6LL
May 17 00:14:30.110013 kernel: NET: Registered PF_ALG protocol family
May 17 00:14:30.256167 systemd-networkd[1368]: cilium_net: Gained IPv6LL
May 17 00:14:30.838066 systemd-networkd[1368]: lxc_health: Link UP
May 17 00:14:30.845254 systemd-networkd[1368]: lxc_health: Gained carrier
May 17 00:14:31.187420 systemd-networkd[1368]: lxcaeff9b8b06be: Link UP
May 17 00:14:31.193365 kernel: eth0: renamed from tmp54e40
May 17 00:14:31.200013 systemd-networkd[1368]: lxcaeff9b8b06be: Gained carrier
May 17 00:14:31.216241 systemd-networkd[1368]: lxc31d35c2f56cf: Link UP
May 17 00:14:31.219016 systemd-networkd[1368]: cilium_vxlan: Gained IPv6LL
May 17 00:14:31.222053 kernel: eth0: renamed from tmp51d66
May 17 00:14:31.227247 systemd-networkd[1368]: lxc31d35c2f56cf: Gained carrier
May 17 00:14:32.496174 systemd-networkd[1368]: lxcaeff9b8b06be: Gained IPv6LL
May 17 00:14:32.560300 systemd-networkd[1368]: lxc_health: Gained IPv6LL
May 17 00:14:33.072219 systemd-networkd[1368]: lxc31d35c2f56cf: Gained IPv6LL
May 17 00:14:35.420620 containerd[1473]: time="2025-05-17T00:14:35.419709109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:35.420620 containerd[1473]: time="2025-05-17T00:14:35.420243283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:35.421070 containerd[1473]: time="2025-05-17T00:14:35.420626453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:35.421102 containerd[1473]: time="2025-05-17T00:14:35.421039743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:35.451331 systemd[1]: Started cri-containerd-51d662dbd42662580d4dd3a70f40a2a812287a487fb22c158fb214375c85eaf4.scope - libcontainer container 51d662dbd42662580d4dd3a70f40a2a812287a487fb22c158fb214375c85eaf4.
May 17 00:14:35.475229 containerd[1473]: time="2025-05-17T00:14:35.474549819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:14:35.475389 containerd[1473]: time="2025-05-17T00:14:35.475188155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:14:35.475389 containerd[1473]: time="2025-05-17T00:14:35.475202516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:35.475463 containerd[1473]: time="2025-05-17T00:14:35.475381160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:14:35.505233 systemd[1]: Started cri-containerd-54e40f1347987c0866fc67691d00c6c67dd64870d3ebd6da1b344dfd156eccc9.scope - libcontainer container 54e40f1347987c0866fc67691d00c6c67dd64870d3ebd6da1b344dfd156eccc9.
May 17 00:14:35.531647 containerd[1473]: time="2025-05-17T00:14:35.530132148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-qc792,Uid:5f90d0f9-48aa-460e-ab4a-4b6009861b8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"51d662dbd42662580d4dd3a70f40a2a812287a487fb22c158fb214375c85eaf4\""
May 17 00:14:35.540191 containerd[1473]: time="2025-05-17T00:14:35.540136168Z" level=info msg="CreateContainer within sandbox \"51d662dbd42662580d4dd3a70f40a2a812287a487fb22c158fb214375c85eaf4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:14:35.560765 containerd[1473]: time="2025-05-17T00:14:35.560695264Z" level=info msg="CreateContainer within sandbox \"51d662dbd42662580d4dd3a70f40a2a812287a487fb22c158fb214375c85eaf4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62369b0f9f9b5c0ef146bf860e4ba6cacc87bb29b86fc9a5b48d1e80dcdb48d5\""
May 17 00:14:35.564123 containerd[1473]: time="2025-05-17T00:14:35.562353468Z" level=info msg="StartContainer for \"62369b0f9f9b5c0ef146bf860e4ba6cacc87bb29b86fc9a5b48d1e80dcdb48d5\""
May 17 00:14:35.575727 containerd[1473]: time="2025-05-17T00:14:35.575522611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wgjcd,Uid:4d3d7230-33d9-4362-ae96-e802860dfd09,Namespace:kube-system,Attempt:0,} returns sandbox id \"54e40f1347987c0866fc67691d00c6c67dd64870d3ebd6da1b344dfd156eccc9\""
May 17 00:14:35.585238 containerd[1473]: time="2025-05-17T00:14:35.585044939Z" level=info
msg="CreateContainer within sandbox \"54e40f1347987c0866fc67691d00c6c67dd64870d3ebd6da1b344dfd156eccc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:14:35.602156 containerd[1473]: time="2025-05-17T00:14:35.600877032Z" level=info msg="CreateContainer within sandbox \"54e40f1347987c0866fc67691d00c6c67dd64870d3ebd6da1b344dfd156eccc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"351d55da9587cffd12022551574fc195640dea76039b07a39593056c01167e8a\"" May 17 00:14:35.604067 containerd[1473]: time="2025-05-17T00:14:35.603997473Z" level=info msg="StartContainer for \"351d55da9587cffd12022551574fc195640dea76039b07a39593056c01167e8a\"" May 17 00:14:35.624367 systemd[1]: Started cri-containerd-62369b0f9f9b5c0ef146bf860e4ba6cacc87bb29b86fc9a5b48d1e80dcdb48d5.scope - libcontainer container 62369b0f9f9b5c0ef146bf860e4ba6cacc87bb29b86fc9a5b48d1e80dcdb48d5. May 17 00:14:35.651225 systemd[1]: Started cri-containerd-351d55da9587cffd12022551574fc195640dea76039b07a39593056c01167e8a.scope - libcontainer container 351d55da9587cffd12022551574fc195640dea76039b07a39593056c01167e8a. 
May 17 00:14:35.676110 containerd[1473]: time="2025-05-17T00:14:35.675947709Z" level=info msg="StartContainer for \"62369b0f9f9b5c0ef146bf860e4ba6cacc87bb29b86fc9a5b48d1e80dcdb48d5\" returns successfully" May 17 00:14:35.716659 containerd[1473]: time="2025-05-17T00:14:35.716505526Z" level=info msg="StartContainer for \"351d55da9587cffd12022551574fc195640dea76039b07a39593056c01167e8a\" returns successfully" May 17 00:14:36.456149 kubelet[2656]: I0517 00:14:36.456076 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wgjcd" podStartSLOduration=19.456054094 podStartE2EDuration="19.456054094s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:36.454763099 +0000 UTC m=+24.308837591" watchObservedRunningTime="2025-05-17 00:14:36.456054094 +0000 UTC m=+24.310128586" May 17 00:14:36.456641 kubelet[2656]: I0517 00:14:36.456180 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-qc792" podStartSLOduration=19.456175217 podStartE2EDuration="19.456175217s" podCreationTimestamp="2025-05-17 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:14:36.439446334 +0000 UTC m=+24.293520826" watchObservedRunningTime="2025-05-17 00:14:36.456175217 +0000 UTC m=+24.310249709" May 17 00:14:42.649109 kubelet[2656]: I0517 00:14:42.648922 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 00:18:50.949253 update_engine[1459]: I20250517 00:18:50.948946 1459 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 00:18:50.949253 update_engine[1459]: I20250517 00:18:50.949083 1459 prefs.cc:52] certificate-report-to-send-download not present in 
/var/lib/update_engine/prefs May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.949756 1459 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.952878 1459 omaha_request_params.cc:62] Current group set to lts May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953022 1459 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953032 1459 update_attempter.cc:643] Scheduling an action processor start. May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953051 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953087 1459 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953182 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953193 1459 omaha_request_action.cc:272] Request: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: May 17 00:18:50.954279 update_engine[1459]: I20250517 00:18:50.953200 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:18:50.954913 update_engine[1459]: I20250517 00:18:50.954864 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:18:50.955717 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 00:18:50.956182 update_engine[1459]: I20250517 00:18:50.956149 1459 
libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:18:50.957844 update_engine[1459]: E20250517 00:18:50.957748 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:18:50.958145 update_engine[1459]: I20250517 00:18:50.957876 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 00:18:52.055636 systemd[1]: Started sshd@7-91.99.12.209:22-139.178.68.195:60480.service - OpenSSH per-connection server daemon (139.178.68.195:60480). May 17 00:18:53.029907 sshd[4063]: Accepted publickey for core from 139.178.68.195 port 60480 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:53.031600 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:53.038727 systemd-logind[1458]: New session 8 of user core. May 17 00:18:53.047233 systemd[1]: Started session-8.scope - Session 8 of User core. May 17 00:18:53.804818 sshd[4063]: pam_unix(sshd:session): session closed for user core May 17 00:18:53.809014 systemd[1]: sshd@7-91.99.12.209:22-139.178.68.195:60480.service: Deactivated successfully. May 17 00:18:53.814303 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:18:53.815788 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. May 17 00:18:53.817862 systemd-logind[1458]: Removed session 8. May 17 00:18:58.986438 systemd[1]: Started sshd@8-91.99.12.209:22-139.178.68.195:60230.service - OpenSSH per-connection server daemon (139.178.68.195:60230). May 17 00:18:59.978600 sshd[4077]: Accepted publickey for core from 139.178.68.195 port 60230 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:18:59.981122 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:18:59.989951 systemd-logind[1458]: New session 9 of user core. May 17 00:18:59.999657 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 17 00:19:00.742177 sshd[4077]: pam_unix(sshd:session): session closed for user core May 17 00:19:00.747441 systemd[1]: sshd@8-91.99.12.209:22-139.178.68.195:60230.service: Deactivated successfully. May 17 00:19:00.751196 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:19:00.753137 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. May 17 00:19:00.754791 systemd-logind[1458]: Removed session 9. May 17 00:19:00.850261 update_engine[1459]: I20250517 00:19:00.850067 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:00.850761 update_engine[1459]: I20250517 00:19:00.850528 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:00.850949 update_engine[1459]: I20250517 00:19:00.850875 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:19:00.852110 update_engine[1459]: E20250517 00:19:00.852035 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:00.852245 update_engine[1459]: I20250517 00:19:00.852169 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 00:19:05.912263 systemd[1]: Started sshd@9-91.99.12.209:22-139.178.68.195:57996.service - OpenSSH per-connection server daemon (139.178.68.195:57996). May 17 00:19:06.903155 sshd[4091]: Accepted publickey for core from 139.178.68.195 port 57996 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:06.905128 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:06.911385 systemd-logind[1458]: New session 10 of user core. May 17 00:19:06.916245 systemd[1]: Started session-10.scope - Session 10 of User core. May 17 00:19:07.679445 sshd[4091]: pam_unix(sshd:session): session closed for user core May 17 00:19:07.683904 systemd[1]: sshd@9-91.99.12.209:22-139.178.68.195:57996.service: Deactivated successfully. 
May 17 00:19:07.685811 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:19:07.686865 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit. May 17 00:19:07.688046 systemd-logind[1458]: Removed session 10. May 17 00:19:07.854481 systemd[1]: Started sshd@10-91.99.12.209:22-139.178.68.195:58000.service - OpenSSH per-connection server daemon (139.178.68.195:58000). May 17 00:19:08.842005 sshd[4105]: Accepted publickey for core from 139.178.68.195 port 58000 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:08.843941 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:08.855172 systemd-logind[1458]: New session 11 of user core. May 17 00:19:08.860494 systemd[1]: Started session-11.scope - Session 11 of User core. May 17 00:19:09.640518 sshd[4105]: pam_unix(sshd:session): session closed for user core May 17 00:19:09.644877 systemd[1]: sshd@10-91.99.12.209:22-139.178.68.195:58000.service: Deactivated successfully. May 17 00:19:09.648105 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:19:09.649004 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit. May 17 00:19:09.650090 systemd-logind[1458]: Removed session 11. May 17 00:19:09.820768 systemd[1]: Started sshd@11-91.99.12.209:22-139.178.68.195:58010.service - OpenSSH per-connection server daemon (139.178.68.195:58010). May 17 00:19:10.804195 sshd[4116]: Accepted publickey for core from 139.178.68.195 port 58010 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:10.806213 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:10.812114 systemd-logind[1458]: New session 12 of user core. May 17 00:19:10.817319 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 17 00:19:10.850085 update_engine[1459]: I20250517 00:19:10.849471 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:10.850085 update_engine[1459]: I20250517 00:19:10.849752 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:10.850085 update_engine[1459]: I20250517 00:19:10.850025 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. May 17 00:19:10.851349 update_engine[1459]: E20250517 00:19:10.851186 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:10.851349 update_engine[1459]: I20250517 00:19:10.851297 1459 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 00:19:11.564287 sshd[4116]: pam_unix(sshd:session): session closed for user core May 17 00:19:11.570352 systemd[1]: sshd@11-91.99.12.209:22-139.178.68.195:58010.service: Deactivated successfully. May 17 00:19:11.572704 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:19:11.573687 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit. May 17 00:19:11.574914 systemd-logind[1458]: Removed session 12. May 17 00:19:16.750585 systemd[1]: Started sshd@12-91.99.12.209:22-139.178.68.195:56214.service - OpenSSH per-connection server daemon (139.178.68.195:56214). May 17 00:19:17.743032 sshd[4130]: Accepted publickey for core from 139.178.68.195 port 56214 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:17.744711 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:17.750973 systemd-logind[1458]: New session 13 of user core. May 17 00:19:17.759380 systemd[1]: Started session-13.scope - Session 13 of User core. May 17 00:19:18.515688 sshd[4130]: pam_unix(sshd:session): session closed for user core May 17 00:19:18.522846 systemd[1]: sshd@12-91.99.12.209:22-139.178.68.195:56214.service: Deactivated successfully. 
May 17 00:19:18.526655 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:19:18.527772 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit. May 17 00:19:18.529962 systemd-logind[1458]: Removed session 13. May 17 00:19:18.688531 systemd[1]: Started sshd@13-91.99.12.209:22-139.178.68.195:56228.service - OpenSSH per-connection server daemon (139.178.68.195:56228). May 17 00:19:19.662853 sshd[4145]: Accepted publickey for core from 139.178.68.195 port 56228 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:19.664733 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:19.671370 systemd-logind[1458]: New session 14 of user core. May 17 00:19:19.675264 systemd[1]: Started session-14.scope - Session 14 of User core. May 17 00:19:20.465239 sshd[4145]: pam_unix(sshd:session): session closed for user core May 17 00:19:20.470502 systemd[1]: sshd@13-91.99.12.209:22-139.178.68.195:56228.service: Deactivated successfully. May 17 00:19:20.474349 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:19:20.475527 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit. May 17 00:19:20.476735 systemd-logind[1458]: Removed session 14. May 17 00:19:20.651055 systemd[1]: Started sshd@14-91.99.12.209:22-139.178.68.195:56244.service - OpenSSH per-connection server daemon (139.178.68.195:56244). May 17 00:19:20.850038 update_engine[1459]: I20250517 00:19:20.849889 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:20.850971 update_engine[1459]: I20250517 00:19:20.850678 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:20.850971 update_engine[1459]: I20250517 00:19:20.850922 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:19:20.852207 update_engine[1459]: E20250517 00:19:20.851855 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.851956 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.851972 1459 omaha_request_action.cc:617] Omaha request response: May 17 00:19:20.852207 update_engine[1459]: E20250517 00:19:20.852144 1459 omaha_request_action.cc:636] Omaha request network transfer failed. May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852175 1459 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852186 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852196 1459 update_attempter.cc:306] Processing Done. May 17 00:19:20.852207 update_engine[1459]: E20250517 00:19:20.852218 1459 update_attempter.cc:619] Update failed. May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852231 1459 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852242 1459 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 00:19:20.852207 update_engine[1459]: I20250517 00:19:20.852253 1459 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
May 17 00:19:20.852851 update_engine[1459]: I20250517 00:19:20.852362 1459 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 00:19:20.852851 update_engine[1459]: I20250517 00:19:20.852401 1459 omaha_request_action.cc:271] Posting an Omaha request to disabled May 17 00:19:20.852851 update_engine[1459]: I20250517 00:19:20.852412 1459 omaha_request_action.cc:272] Request: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: May 17 00:19:20.852851 update_engine[1459]: I20250517 00:19:20.852424 1459 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 00:19:20.852851 update_engine[1459]: I20250517 00:19:20.852721 1459 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 00:19:20.853870 update_engine[1459]: I20250517 00:19:20.853004 1459 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
May 17 00:19:20.854013 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 00:19:20.854618 locksmithd[1490]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 00:19:20.854665 update_engine[1459]: E20250517 00:19:20.854081 1459 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854171 1459 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854187 1459 omaha_request_action.cc:617] Omaha request response: May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854199 1459 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854209 1459 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854218 1459 update_attempter.cc:306] Processing Done. May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854231 1459 update_attempter.cc:310] Error event sent. May 17 00:19:20.854665 update_engine[1459]: I20250517 00:19:20.854247 1459 update_check_scheduler.cc:74] Next update check in 45m13s May 17 00:19:21.637691 sshd[4156]: Accepted publickey for core from 139.178.68.195 port 56244 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:21.640259 sshd[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:21.646136 systemd-logind[1458]: New session 15 of user core. May 17 00:19:21.652295 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 17 00:19:23.400882 sshd[4156]: pam_unix(sshd:session): session closed for user core May 17 00:19:23.406827 systemd[1]: sshd@14-91.99.12.209:22-139.178.68.195:56244.service: Deactivated successfully. May 17 00:19:23.411212 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:19:23.412179 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit. May 17 00:19:23.413198 systemd-logind[1458]: Removed session 15. May 17 00:19:23.579642 systemd[1]: Started sshd@15-91.99.12.209:22-139.178.68.195:56246.service - OpenSSH per-connection server daemon (139.178.68.195:56246). May 17 00:19:24.573794 sshd[4174]: Accepted publickey for core from 139.178.68.195 port 56246 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:24.576321 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:24.583415 systemd-logind[1458]: New session 16 of user core. May 17 00:19:24.587344 systemd[1]: Started session-16.scope - Session 16 of User core. May 17 00:19:25.474751 sshd[4174]: pam_unix(sshd:session): session closed for user core May 17 00:19:25.480842 systemd[1]: sshd@15-91.99.12.209:22-139.178.68.195:56246.service: Deactivated successfully. May 17 00:19:25.484097 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:19:25.485368 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit. May 17 00:19:25.486331 systemd-logind[1458]: Removed session 16. May 17 00:19:25.654429 systemd[1]: Started sshd@16-91.99.12.209:22-139.178.68.195:56098.service - OpenSSH per-connection server daemon (139.178.68.195:56098). May 17 00:19:26.644486 sshd[4186]: Accepted publickey for core from 139.178.68.195 port 56098 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:26.646461 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:26.652930 systemd-logind[1458]: New session 17 of user core. 
May 17 00:19:26.657426 systemd[1]: Started session-17.scope - Session 17 of User core. May 17 00:19:27.403391 sshd[4186]: pam_unix(sshd:session): session closed for user core May 17 00:19:27.408734 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit. May 17 00:19:27.409199 systemd[1]: sshd@16-91.99.12.209:22-139.178.68.195:56098.service: Deactivated successfully. May 17 00:19:27.411773 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:19:27.413751 systemd-logind[1458]: Removed session 17. May 17 00:19:32.576501 systemd[1]: Started sshd@17-91.99.12.209:22-139.178.68.195:56112.service - OpenSSH per-connection server daemon (139.178.68.195:56112). May 17 00:19:33.550518 sshd[4201]: Accepted publickey for core from 139.178.68.195 port 56112 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:33.553214 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:33.559363 systemd-logind[1458]: New session 18 of user core. May 17 00:19:33.564918 systemd[1]: Started session-18.scope - Session 18 of User core. May 17 00:19:34.308368 sshd[4201]: pam_unix(sshd:session): session closed for user core May 17 00:19:34.313569 systemd[1]: sshd@17-91.99.12.209:22-139.178.68.195:56112.service: Deactivated successfully. May 17 00:19:34.316402 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:19:34.318264 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit. May 17 00:19:34.320336 systemd-logind[1458]: Removed session 18. May 17 00:19:39.486423 systemd[1]: Started sshd@18-91.99.12.209:22-139.178.68.195:42562.service - OpenSSH per-connection server daemon (139.178.68.195:42562). 
May 17 00:19:40.460679 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 42562 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:40.463539 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:40.468821 systemd-logind[1458]: New session 19 of user core. May 17 00:19:40.473362 systemd[1]: Started session-19.scope - Session 19 of User core. May 17 00:19:41.214602 sshd[4215]: pam_unix(sshd:session): session closed for user core May 17 00:19:41.220862 systemd[1]: sshd@18-91.99.12.209:22-139.178.68.195:42562.service: Deactivated successfully. May 17 00:19:41.224208 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:19:41.226370 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit. May 17 00:19:41.227596 systemd-logind[1458]: Removed session 19. May 17 00:19:41.399108 systemd[1]: Started sshd@19-91.99.12.209:22-139.178.68.195:42564.service - OpenSSH per-connection server daemon (139.178.68.195:42564). May 17 00:19:42.406204 sshd[4228]: Accepted publickey for core from 139.178.68.195 port 42564 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI May 17 00:19:42.405090 sshd[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:19:42.419493 systemd-logind[1458]: New session 20 of user core. May 17 00:19:42.425408 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 17 00:19:44.837443 containerd[1473]: time="2025-05-17T00:19:44.837390695Z" level=info msg="StopContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" with timeout 30 (s)" May 17 00:19:44.838884 containerd[1473]: time="2025-05-17T00:19:44.838620934Z" level=info msg="Stop container \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" with signal terminated" May 17 00:19:44.851355 containerd[1473]: time="2025-05-17T00:19:44.851295293Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:19:44.855603 systemd[1]: cri-containerd-d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946.scope: Deactivated successfully. May 17 00:19:44.866877 containerd[1473]: time="2025-05-17T00:19:44.866738339Z" level=info msg="StopContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" with timeout 2 (s)" May 17 00:19:44.867345 containerd[1473]: time="2025-05-17T00:19:44.867292836Z" level=info msg="Stop container \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" with signal terminated" May 17 00:19:44.881315 systemd-networkd[1368]: lxc_health: Link DOWN May 17 00:19:44.881323 systemd-networkd[1368]: lxc_health: Lost carrier May 17 00:19:44.912204 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946-rootfs.mount: Deactivated successfully. May 17 00:19:44.913153 systemd[1]: cri-containerd-039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28.scope: Deactivated successfully. May 17 00:19:44.913431 systemd[1]: cri-containerd-039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28.scope: Consumed 8.320s CPU time. 
May 17 00:19:44.926643 containerd[1473]: time="2025-05-17T00:19:44.926569503Z" level=info msg="shim disconnected" id=d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946 namespace=k8s.io May 17 00:19:44.926643 containerd[1473]: time="2025-05-17T00:19:44.926637425Z" level=warning msg="cleaning up after shim disconnected" id=d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946 namespace=k8s.io May 17 00:19:44.926643 containerd[1473]: time="2025-05-17T00:19:44.926651105Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:44.950476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28-rootfs.mount: Deactivated successfully. May 17 00:19:44.954650 containerd[1473]: time="2025-05-17T00:19:44.954535703Z" level=info msg="shim disconnected" id=039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28 namespace=k8s.io May 17 00:19:44.954650 containerd[1473]: time="2025-05-17T00:19:44.954691068Z" level=warning msg="cleaning up after shim disconnected" id=039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28 namespace=k8s.io May 17 00:19:44.954650 containerd[1473]: time="2025-05-17T00:19:44.954714269Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:44.965147 containerd[1473]: time="2025-05-17T00:19:44.964950111Z" level=info msg="StopContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" returns successfully" May 17 00:19:44.966224 containerd[1473]: time="2025-05-17T00:19:44.966130588Z" level=info msg="StopPodSandbox for \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\"" May 17 00:19:44.966224 containerd[1473]: time="2025-05-17T00:19:44.966185910Z" level=info msg="Container to stop \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.969771 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f-shm.mount: Deactivated successfully. May 17 00:19:44.981298 systemd[1]: cri-containerd-a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f.scope: Deactivated successfully. May 17 00:19:44.988031 containerd[1473]: time="2025-05-17T00:19:44.987852752Z" level=info msg="StopContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" returns successfully" May 17 00:19:44.988451 containerd[1473]: time="2025-05-17T00:19:44.988412650Z" level=info msg="StopPodSandbox for \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\"" May 17 00:19:44.988574 containerd[1473]: time="2025-05-17T00:19:44.988555454Z" level=info msg="Container to stop \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.988677 containerd[1473]: time="2025-05-17T00:19:44.988659978Z" level=info msg="Container to stop \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.988751 containerd[1473]: time="2025-05-17T00:19:44.988737420Z" level=info msg="Container to stop \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.988919 containerd[1473]: time="2025-05-17T00:19:44.988790582Z" level=info msg="Container to stop \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.988919 containerd[1473]: time="2025-05-17T00:19:44.988806862Z" level=info msg="Container to stop \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 17 00:19:44.995339 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6-shm.mount: Deactivated successfully. May 17 00:19:44.997770 systemd[1]: cri-containerd-9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6.scope: Deactivated successfully. May 17 00:19:45.026961 containerd[1473]: time="2025-05-17T00:19:45.026658695Z" level=info msg="shim disconnected" id=a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f namespace=k8s.io May 17 00:19:45.026961 containerd[1473]: time="2025-05-17T00:19:45.026903342Z" level=warning msg="cleaning up after shim disconnected" id=a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f namespace=k8s.io May 17 00:19:45.027576 containerd[1473]: time="2025-05-17T00:19:45.026918183Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:45.036377 containerd[1473]: time="2025-05-17T00:19:45.036134873Z" level=info msg="shim disconnected" id=9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6 namespace=k8s.io May 17 00:19:45.036377 containerd[1473]: time="2025-05-17T00:19:45.036204715Z" level=warning msg="cleaning up after shim disconnected" id=9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6 namespace=k8s.io May 17 00:19:45.036377 containerd[1473]: time="2025-05-17T00:19:45.036213356Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:19:45.051551 containerd[1473]: time="2025-05-17T00:19:45.051253309Z" level=info msg="TearDown network for sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" successfully" May 17 00:19:45.051551 containerd[1473]: time="2025-05-17T00:19:45.051336672Z" level=info msg="StopPodSandbox for \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" returns successfully" May 17 00:19:45.058301 containerd[1473]: time="2025-05-17T00:19:45.058250570Z" level=info msg="TearDown network for sandbox 
\"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" successfully" May 17 00:19:45.058555 containerd[1473]: time="2025-05-17T00:19:45.058373814Z" level=info msg="StopPodSandbox for \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" returns successfully" May 17 00:19:45.214768 kubelet[2656]: I0517 00:19:45.212928 2656 scope.go:117] "RemoveContainer" containerID="d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946" May 17 00:19:45.218536 containerd[1473]: time="2025-05-17T00:19:45.218092086Z" level=info msg="RemoveContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\"" May 17 00:19:45.219274 kubelet[2656]: I0517 00:19:45.219236 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr8b2\" (UniqueName: \"kubernetes.io/projected/2079f51c-d281-4f28-849b-b7e64c8f6c66-kube-api-access-dr8b2\") pod \"2079f51c-d281-4f28-849b-b7e64c8f6c66\" (UID: \"2079f51c-d281-4f28-849b-b7e64c8f6c66\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219353 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-kernel\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219376 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-hostproc\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219392 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-xtables-lock\") pod 
\"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219413 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g68np\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-kube-api-access-g68np\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219433 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-net\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219566 kubelet[2656]: I0517 00:19:45.219450 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cni-path\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219746 kubelet[2656]: I0517 00:19:45.219471 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8905af95-e457-422d-8261-d744722d97d0-clustermesh-secrets\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219746 kubelet[2656]: I0517 00:19:45.219491 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-hubble-tls\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.219746 kubelet[2656]: I0517 00:19:45.219638 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219844 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-run\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219873 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-bpf-maps\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219888 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-cgroup\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219911 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8905af95-e457-422d-8261-d744722d97d0-cilium-config-path\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219930 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-lib-modules\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: 
\"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221017 kubelet[2656]: I0517 00:19:45.219955 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-etc-cni-netd\") pod \"8905af95-e457-422d-8261-d744722d97d0\" (UID: \"8905af95-e457-422d-8261-d744722d97d0\") " May 17 00:19:45.221277 kubelet[2656]: I0517 00:19:45.219972 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2079f51c-d281-4f28-849b-b7e64c8f6c66-cilium-config-path\") pod \"2079f51c-d281-4f28-849b-b7e64c8f6c66\" (UID: \"2079f51c-d281-4f28-849b-b7e64c8f6c66\") " May 17 00:19:45.221277 kubelet[2656]: I0517 00:19:45.220039 2656 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-xtables-lock\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.221529 kubelet[2656]: I0517 00:19:45.221466 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.221716 kubelet[2656]: I0517 00:19:45.221698 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-hostproc" (OuterVolumeSpecName: "hostproc") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.224156 kubelet[2656]: I0517 00:19:45.224115 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2079f51c-d281-4f28-849b-b7e64c8f6c66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2079f51c-d281-4f28-849b-b7e64c8f6c66" (UID: "2079f51c-d281-4f28-849b-b7e64c8f6c66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:19:45.225565 containerd[1473]: time="2025-05-17T00:19:45.225517120Z" level=info msg="RemoveContainer for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" returns successfully" May 17 00:19:45.225835 kubelet[2656]: I0517 00:19:45.225798 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.226379 kubelet[2656]: I0517 00:19:45.225960 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cni-path" (OuterVolumeSpecName: "cni-path") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.228212 kubelet[2656]: I0517 00:19:45.228172 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.228311 kubelet[2656]: I0517 00:19:45.228232 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.228311 kubelet[2656]: I0517 00:19:45.228249 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.228532 kubelet[2656]: I0517 00:19:45.228498 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.228578 kubelet[2656]: I0517 00:19:45.228532 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 17 00:19:45.230145 kubelet[2656]: I0517 00:19:45.230108 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2079f51c-d281-4f28-849b-b7e64c8f6c66-kube-api-access-dr8b2" (OuterVolumeSpecName: "kube-api-access-dr8b2") pod "2079f51c-d281-4f28-849b-b7e64c8f6c66" (UID: "2079f51c-d281-4f28-849b-b7e64c8f6c66"). InnerVolumeSpecName "kube-api-access-dr8b2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:19:45.231286 kubelet[2656]: I0517 00:19:45.231230 2656 scope.go:117] "RemoveContainer" containerID="d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946" May 17 00:19:45.232289 containerd[1473]: time="2025-05-17T00:19:45.232234251Z" level=error msg="ContainerStatus for \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\": not found" May 17 00:19:45.233991 kubelet[2656]: E0517 00:19:45.233526 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\": not found" containerID="d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946" May 17 00:19:45.233991 kubelet[2656]: I0517 00:19:45.233567 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946"} err="failed to get container status \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0e8602021e5009d2e3957e4ce1cce70439dcb83c7da5a9e6a14c5f384b14946\": not found" May 17 00:19:45.233991 kubelet[2656]: I0517 00:19:45.233722 2656 scope.go:117] 
"RemoveContainer" containerID="039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28" May 17 00:19:45.236472 kubelet[2656]: I0517 00:19:45.236435 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-kube-api-access-g68np" (OuterVolumeSpecName: "kube-api-access-g68np") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "kube-api-access-g68np". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:19:45.236676 kubelet[2656]: I0517 00:19:45.236658 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8905af95-e457-422d-8261-d744722d97d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 17 00:19:45.237516 kubelet[2656]: I0517 00:19:45.237457 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 17 00:19:45.237871 containerd[1473]: time="2025-05-17T00:19:45.237788666Z" level=info msg="RemoveContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\"" May 17 00:19:45.238160 kubelet[2656]: I0517 00:19:45.237734 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8905af95-e457-422d-8261-d744722d97d0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8905af95-e457-422d-8261-d744722d97d0" (UID: "8905af95-e457-422d-8261-d744722d97d0"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 17 00:19:45.241831 containerd[1473]: time="2025-05-17T00:19:45.241776992Z" level=info msg="RemoveContainer for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" returns successfully" May 17 00:19:45.242348 kubelet[2656]: I0517 00:19:45.242226 2656 scope.go:117] "RemoveContainer" containerID="60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77" May 17 00:19:45.244133 containerd[1473]: time="2025-05-17T00:19:45.243946820Z" level=info msg="RemoveContainer for \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\"" May 17 00:19:45.248639 containerd[1473]: time="2025-05-17T00:19:45.248469603Z" level=info msg="RemoveContainer for \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\" returns successfully" May 17 00:19:45.249286 kubelet[2656]: I0517 00:19:45.249124 2656 scope.go:117] "RemoveContainer" containerID="a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15" May 17 00:19:45.250932 containerd[1473]: time="2025-05-17T00:19:45.250859958Z" level=info msg="RemoveContainer for \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\"" May 17 00:19:45.254069 containerd[1473]: time="2025-05-17T00:19:45.254007417Z" level=info msg="RemoveContainer for \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\" returns successfully" May 17 00:19:45.254559 kubelet[2656]: I0517 00:19:45.254460 2656 scope.go:117] "RemoveContainer" containerID="83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0" May 17 00:19:45.256090 containerd[1473]: time="2025-05-17T00:19:45.255864356Z" level=info msg="RemoveContainer for \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\"" May 17 00:19:45.258886 containerd[1473]: time="2025-05-17T00:19:45.258812649Z" level=info msg="RemoveContainer for \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\" returns 
successfully" May 17 00:19:45.259254 kubelet[2656]: I0517 00:19:45.259115 2656 scope.go:117] "RemoveContainer" containerID="2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1" May 17 00:19:45.261379 containerd[1473]: time="2025-05-17T00:19:45.260959516Z" level=info msg="RemoveContainer for \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\"" May 17 00:19:45.264801 containerd[1473]: time="2025-05-17T00:19:45.264666873Z" level=info msg="RemoveContainer for \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\" returns successfully" May 17 00:19:45.265037 kubelet[2656]: I0517 00:19:45.264934 2656 scope.go:117] "RemoveContainer" containerID="039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28" May 17 00:19:45.265529 containerd[1473]: time="2025-05-17T00:19:45.265495459Z" level=error msg="ContainerStatus for \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\": not found" May 17 00:19:45.265842 kubelet[2656]: E0517 00:19:45.265724 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\": not found" containerID="039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28" May 17 00:19:45.265842 kubelet[2656]: I0517 00:19:45.265756 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28"} err="failed to get container status \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\": rpc error: code = NotFound desc = an error occurred when try to find container \"039cd2b162d746cb8a014639586d8095aa7176081bddbddb3e016513d429ca28\": not found" May 17 
00:19:45.265842 kubelet[2656]: I0517 00:19:45.265777 2656 scope.go:117] "RemoveContainer" containerID="60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77" May 17 00:19:45.266305 containerd[1473]: time="2025-05-17T00:19:45.266214162Z" level=error msg="ContainerStatus for \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\": not found" May 17 00:19:45.266431 kubelet[2656]: E0517 00:19:45.266401 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\": not found" containerID="60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77" May 17 00:19:45.266481 kubelet[2656]: I0517 00:19:45.266434 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77"} err="failed to get container status \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\": rpc error: code = NotFound desc = an error occurred when try to find container \"60e1425992b5d8cf0e996e9f91ea16882ab4902e317e3eff9b9917203d40bd77\": not found" May 17 00:19:45.266481 kubelet[2656]: I0517 00:19:45.266455 2656 scope.go:117] "RemoveContainer" containerID="a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15" May 17 00:19:45.266865 containerd[1473]: time="2025-05-17T00:19:45.266747739Z" level=error msg="ContainerStatus for \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\": not found" May 17 00:19:45.266959 kubelet[2656]: E0517 00:19:45.266909 2656 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\": not found" containerID="a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15" May 17 00:19:45.266959 kubelet[2656]: I0517 00:19:45.266929 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15"} err="failed to get container status \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\": rpc error: code = NotFound desc = an error occurred when try to find container \"a01c25d2be70cb8cdd07c1f065377f4ae33b0c35888d065487fd7e476b991c15\": not found" May 17 00:19:45.266959 kubelet[2656]: I0517 00:19:45.266942 2656 scope.go:117] "RemoveContainer" containerID="83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0" May 17 00:19:45.267627 containerd[1473]: time="2025-05-17T00:19:45.267271555Z" level=error msg="ContainerStatus for \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\": not found" May 17 00:19:45.267701 kubelet[2656]: E0517 00:19:45.267431 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\": not found" containerID="83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0" May 17 00:19:45.267701 kubelet[2656]: I0517 00:19:45.267519 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0"} err="failed to get container status 
\"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"83179312764b258ef1c955fe1f7a355cc6a6d8fc91eee1d846f32952c17f55d0\": not found" May 17 00:19:45.267701 kubelet[2656]: I0517 00:19:45.267538 2656 scope.go:117] "RemoveContainer" containerID="2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1" May 17 00:19:45.267780 containerd[1473]: time="2025-05-17T00:19:45.267733930Z" level=error msg="ContainerStatus for \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\": not found" May 17 00:19:45.268033 kubelet[2656]: E0517 00:19:45.267903 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\": not found" containerID="2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1" May 17 00:19:45.268033 kubelet[2656]: I0517 00:19:45.267934 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1"} err="failed to get container status \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c15c54d822cb93e738c681c675b37d5170371d5c8b4b359cd0111cbdc8cf7f1\": not found" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320718 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dr8b2\" (UniqueName: \"kubernetes.io/projected/2079f51c-d281-4f28-849b-b7e64c8f6c66-kube-api-access-dr8b2\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320779 2656 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320836 2656 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-hostproc\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320869 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g68np\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-kube-api-access-g68np\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320902 2656 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-host-proc-sys-net\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320940 2656 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cni-path\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.320958 2656 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8905af95-e457-422d-8261-d744722d97d0-clustermesh-secrets\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321189 kubelet[2656]: I0517 00:19:45.321015 2656 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8905af95-e457-422d-8261-d744722d97d0-hubble-tls\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 
00:19:45.321040 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-run\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321054 2656 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-bpf-maps\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321074 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-cilium-cgroup\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321089 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8905af95-e457-422d-8261-d744722d97d0-cilium-config-path\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321110 2656 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-lib-modules\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321135 2656 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8905af95-e457-422d-8261-d744722d97d0-etc-cni-netd\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.321698 kubelet[2656]: I0517 00:19:45.321149 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2079f51c-d281-4f28-849b-b7e64c8f6c66-cilium-config-path\") on node \"ci-4081-3-3-n-58e6742ed6\" DevicePath \"\"" May 17 00:19:45.520847 systemd[1]: Removed slice 
kubepods-besteffort-pod2079f51c_d281_4f28_849b_b7e64c8f6c66.slice - libcontainer container kubepods-besteffort-pod2079f51c_d281_4f28_849b_b7e64c8f6c66.slice. May 17 00:19:45.534746 systemd[1]: Removed slice kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice - libcontainer container kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice. May 17 00:19:45.534863 systemd[1]: kubepods-burstable-pod8905af95_e457_422d_8261_d744722d97d0.slice: Consumed 8.411s CPU time. May 17 00:19:45.823428 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f-rootfs.mount: Deactivated successfully. May 17 00:19:45.823870 systemd[1]: var-lib-kubelet-pods-2079f51c\x2dd281\x2d4f28\x2d849b\x2db7e64c8f6c66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddr8b2.mount: Deactivated successfully. May 17 00:19:45.824056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6-rootfs.mount: Deactivated successfully. May 17 00:19:45.824168 systemd[1]: var-lib-kubelet-pods-8905af95\x2de457\x2d422d\x2d8261\x2dd744722d97d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg68np.mount: Deactivated successfully. May 17 00:19:45.824263 systemd[1]: var-lib-kubelet-pods-8905af95\x2de457\x2d422d\x2d8261\x2dd744722d97d0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 17 00:19:45.824355 systemd[1]: var-lib-kubelet-pods-8905af95\x2de457\x2d422d\x2d8261\x2dd744722d97d0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 17 00:19:46.294759 kubelet[2656]: I0517 00:19:46.294418 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2079f51c-d281-4f28-849b-b7e64c8f6c66" path="/var/lib/kubelet/pods/2079f51c-d281-4f28-849b-b7e64c8f6c66/volumes"
May 17 00:19:46.296765 kubelet[2656]: I0517 00:19:46.296160 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8905af95-e457-422d-8261-d744722d97d0" path="/var/lib/kubelet/pods/8905af95-e457-422d-8261-d744722d97d0/volumes"
May 17 00:19:46.904684 sshd[4228]: pam_unix(sshd:session): session closed for user core
May 17 00:19:46.910505 systemd[1]: sshd@19-91.99.12.209:22-139.178.68.195:42564.service: Deactivated successfully.
May 17 00:19:46.913080 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:19:46.913250 systemd[1]: session-20.scope: Consumed 1.214s CPU time.
May 17 00:19:46.913993 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
May 17 00:19:46.915968 systemd-logind[1458]: Removed session 20.
May 17 00:19:47.079388 systemd[1]: Started sshd@20-91.99.12.209:22-139.178.68.195:53824.service - OpenSSH per-connection server daemon (139.178.68.195:53824).
May 17 00:19:47.475291 kubelet[2656]: E0517 00:19:47.475202 2656 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:19:48.061325 sshd[4391]: Accepted publickey for core from 139.178.68.195 port 53824 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:19:48.064108 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:19:48.074522 systemd-logind[1458]: New session 21 of user core.
May 17 00:19:48.083348 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:19:49.490875 kubelet[2656]: I0517 00:19:49.490802 2656 setters.go:618] "Node became not ready" node="ci-4081-3-3-n-58e6742ed6" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:19:49Z","lastTransitionTime":"2025-05-17T00:19:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:19:49.659006 systemd[1]: Created slice kubepods-burstable-pod5da97bfa_d04f_4ed0_bd79_8a4fbd091b47.slice - libcontainer container kubepods-burstable-pod5da97bfa_d04f_4ed0_bd79_8a4fbd091b47.slice.
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751542 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-lib-modules\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751632 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps2pk\" (UniqueName: \"kubernetes.io/projected/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-kube-api-access-ps2pk\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751679 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-bpf-maps\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751713 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-etc-cni-netd\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751750 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-cni-path\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752055 kubelet[2656]: I0517 00:19:49.751790 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-clustermesh-secrets\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752693 kubelet[2656]: I0517 00:19:49.751824 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-host-proc-sys-net\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752693 kubelet[2656]: I0517 00:19:49.751858 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-host-proc-sys-kernel\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752693 kubelet[2656]: I0517 00:19:49.751916 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-cilium-cgroup\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752693 kubelet[2656]: I0517 00:19:49.751952 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-cilium-config-path\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752693 kubelet[2656]: I0517 00:19:49.752030 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-xtables-lock\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752860 kubelet[2656]: I0517 00:19:49.752071 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-cilium-ipsec-secrets\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752860 kubelet[2656]: I0517 00:19:49.752104 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-hubble-tls\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752860 kubelet[2656]: I0517 00:19:49.752137 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-cilium-run\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.752860 kubelet[2656]: I0517 00:19:49.752169 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5da97bfa-d04f-4ed0-bd79-8a4fbd091b47-hostproc\") pod \"cilium-fd7g2\" (UID: \"5da97bfa-d04f-4ed0-bd79-8a4fbd091b47\") " pod="kube-system/cilium-fd7g2"
May 17 00:19:49.813651 sshd[4391]: pam_unix(sshd:session): session closed for user core
May 17 00:19:49.820721 systemd[1]: sshd@20-91.99.12.209:22-139.178.68.195:53824.service: Deactivated successfully.
May 17 00:19:49.824516 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:19:49.825954 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
May 17 00:19:49.827475 systemd-logind[1458]: Removed session 21.
May 17 00:19:49.962741 containerd[1473]: time="2025-05-17T00:19:49.962660952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fd7g2,Uid:5da97bfa-d04f-4ed0-bd79-8a4fbd091b47,Namespace:kube-system,Attempt:0,}"
May 17 00:19:49.991817 containerd[1473]: time="2025-05-17T00:19:49.991633667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:19:49.991817 containerd[1473]: time="2025-05-17T00:19:49.991694389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:19:49.991817 containerd[1473]: time="2025-05-17T00:19:49.991710589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:19:49.994081 containerd[1473]: time="2025-05-17T00:19:49.991793152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:19:49.995648 systemd[1]: Started sshd@21-91.99.12.209:22-139.178.68.195:53836.service - OpenSSH per-connection server daemon (139.178.68.195:53836).
May 17 00:19:50.019246 systemd[1]: Started cri-containerd-bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a.scope - libcontainer container bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a.
May 17 00:19:50.045740 containerd[1473]: time="2025-05-17T00:19:50.045697575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fd7g2,Uid:5da97bfa-d04f-4ed0-bd79-8a4fbd091b47,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\""
May 17 00:19:50.053939 containerd[1473]: time="2025-05-17T00:19:50.053868273Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:19:50.067424 containerd[1473]: time="2025-05-17T00:19:50.067352419Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391\""
May 17 00:19:50.070386 containerd[1473]: time="2025-05-17T00:19:50.068702022Z" level=info msg="StartContainer for \"df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391\""
May 17 00:19:50.114307 systemd[1]: Started cri-containerd-df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391.scope - libcontainer container df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391.
May 17 00:19:50.144157 containerd[1473]: time="2025-05-17T00:19:50.143529266Z" level=info msg="StartContainer for \"df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391\" returns successfully"
May 17 00:19:50.155386 systemd[1]: cri-containerd-df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391.scope: Deactivated successfully.
May 17 00:19:50.194837 containerd[1473]: time="2025-05-17T00:19:50.194743405Z" level=info msg="shim disconnected" id=df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391 namespace=k8s.io
May 17 00:19:50.194837 containerd[1473]: time="2025-05-17T00:19:50.194823007Z" level=warning msg="cleaning up after shim disconnected" id=df12671c68d05bf6d3285d56c7615db990faf6ef75d73e50a0f40858d12e2391 namespace=k8s.io
May 17 00:19:50.194837 containerd[1473]: time="2025-05-17T00:19:50.194833168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:19:50.250769 containerd[1473]: time="2025-05-17T00:19:50.250707693Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:19:50.264950 containerd[1473]: time="2025-05-17T00:19:50.264851020Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e\""
May 17 00:19:50.266823 containerd[1473]: time="2025-05-17T00:19:50.265864532Z" level=info msg="StartContainer for \"4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e\""
May 17 00:19:50.298250 systemd[1]: Started cri-containerd-4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e.scope - libcontainer container 4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e.
May 17 00:19:50.329736 containerd[1473]: time="2025-05-17T00:19:50.329674228Z" level=info msg="StartContainer for \"4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e\" returns successfully"
May 17 00:19:50.343007 systemd[1]: cri-containerd-4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e.scope: Deactivated successfully.
May 17 00:19:50.370791 containerd[1473]: time="2025-05-17T00:19:50.370708245Z" level=info msg="shim disconnected" id=4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e namespace=k8s.io
May 17 00:19:50.370791 containerd[1473]: time="2025-05-17T00:19:50.370782407Z" level=warning msg="cleaning up after shim disconnected" id=4440e02513264702a3e0ddf3936eb64ab40c2c9f5b4ce445ef30df94c8372c8e namespace=k8s.io
May 17 00:19:50.370791 containerd[1473]: time="2025-05-17T00:19:50.370799208Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:19:50.995878 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 53836 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:19:50.998359 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:19:51.005078 systemd-logind[1458]: New session 22 of user core.
May 17 00:19:51.014414 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:19:51.257662 containerd[1473]: time="2025-05-17T00:19:51.257540953Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:19:51.280024 containerd[1473]: time="2025-05-17T00:19:51.279578929Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f\""
May 17 00:19:51.281271 containerd[1473]: time="2025-05-17T00:19:51.280403195Z" level=info msg="StartContainer for \"4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f\""
May 17 00:19:51.314366 systemd[1]: Started cri-containerd-4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f.scope - libcontainer container 4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f.
May 17 00:19:51.348302 containerd[1473]: time="2025-05-17T00:19:51.348122736Z" level=info msg="StartContainer for \"4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f\" returns successfully"
May 17 00:19:51.349575 systemd[1]: cri-containerd-4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f.scope: Deactivated successfully.
May 17 00:19:51.382013 containerd[1473]: time="2025-05-17T00:19:51.381703638Z" level=info msg="shim disconnected" id=4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f namespace=k8s.io
May 17 00:19:51.382013 containerd[1473]: time="2025-05-17T00:19:51.381775000Z" level=warning msg="cleaning up after shim disconnected" id=4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f namespace=k8s.io
May 17 00:19:51.382013 containerd[1473]: time="2025-05-17T00:19:51.381786921Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:19:51.686768 sshd[4421]: pam_unix(sshd:session): session closed for user core
May 17 00:19:51.692034 systemd[1]: sshd@21-91.99.12.209:22-139.178.68.195:53836.service: Deactivated successfully.
May 17 00:19:51.694965 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:19:51.696584 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
May 17 00:19:51.697813 systemd-logind[1458]: Removed session 22.
May 17 00:19:51.869136 systemd[1]: Started sshd@22-91.99.12.209:22-139.178.68.195:53838.service - OpenSSH per-connection server daemon (139.178.68.195:53838).
May 17 00:19:51.871291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c966829545f408bb6c4dded32ca00c88319f045b2a64373b4e9a58a79c90e7f-rootfs.mount: Deactivated successfully.
May 17 00:19:52.261805 containerd[1473]: time="2025-05-17T00:19:52.261749227Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:19:52.283522 containerd[1473]: time="2025-05-17T00:19:52.283353430Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790\""
May 17 00:19:52.284436 containerd[1473]: time="2025-05-17T00:19:52.284388423Z" level=info msg="StartContainer for \"0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790\""
May 17 00:19:52.332224 systemd[1]: Started cri-containerd-0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790.scope - libcontainer container 0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790.
May 17 00:19:52.361800 systemd[1]: cri-containerd-0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790.scope: Deactivated successfully.
May 17 00:19:52.366003 containerd[1473]: time="2025-05-17T00:19:52.365928763Z" level=info msg="StartContainer for \"0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790\" returns successfully"
May 17 00:19:52.409494 containerd[1473]: time="2025-05-17T00:19:52.409406018Z" level=info msg="shim disconnected" id=0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790 namespace=k8s.io
May 17 00:19:52.409494 containerd[1473]: time="2025-05-17T00:19:52.409485181Z" level=warning msg="cleaning up after shim disconnected" id=0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790 namespace=k8s.io
May 17 00:19:52.410058 containerd[1473]: time="2025-05-17T00:19:52.409509781Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:19:52.477315 kubelet[2656]: E0517 00:19:52.477260 2656 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:19:52.862115 sshd[4641]: Accepted publickey for core from 139.178.68.195 port 53838 ssh2: RSA SHA256:3DH0lUPdHwyQJn3I0ENA7R+xFRfXGfTtJFJ5l1PYReI
May 17 00:19:52.864681 sshd[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:19:52.872521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f7f880885a301ac1a88184023909cea3593584c1fe44882f5795f18ddeda790-rootfs.mount: Deactivated successfully.
May 17 00:19:52.877493 systemd-logind[1458]: New session 23 of user core.
May 17 00:19:52.889791 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:19:53.268919 containerd[1473]: time="2025-05-17T00:19:53.268794409Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:19:53.287107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304763684.mount: Deactivated successfully.
May 17 00:19:53.298732 containerd[1473]: time="2025-05-17T00:19:53.298675795Z" level=info msg="CreateContainer within sandbox \"bee5e0ce6946488e91cfacb0cc1a7d01eae568a88414459d145c863731a05b8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624\""
May 17 00:19:53.299633 containerd[1473]: time="2025-05-17T00:19:53.299471860Z" level=info msg="StartContainer for \"76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624\""
May 17 00:19:53.336235 systemd[1]: Started cri-containerd-76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624.scope - libcontainer container 76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624.
May 17 00:19:53.372827 containerd[1473]: time="2025-05-17T00:19:53.372777940Z" level=info msg="StartContainer for \"76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624\" returns successfully"
May 17 00:19:53.752326 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 17 00:19:54.299535 kubelet[2656]: I0517 00:19:54.299470 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fd7g2" podStartSLOduration=5.299445356 podStartE2EDuration="5.299445356s" podCreationTimestamp="2025-05-17 00:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:19:54.296910516 +0000 UTC m=+342.150985048" watchObservedRunningTime="2025-05-17 00:19:54.299445356 +0000 UTC m=+342.153519848"
May 17 00:19:55.292594 kubelet[2656]: E0517 00:19:55.291759 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qc792" podUID="5f90d0f9-48aa-460e-ab4a-4b6009861b8a"
May 17 00:19:56.828198 systemd-networkd[1368]: lxc_health: Link UP
May 17 00:19:56.847453 systemd-networkd[1368]: lxc_health: Gained carrier
May 17 00:19:57.290480 kubelet[2656]: E0517 00:19:57.290064 2656 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-qc792" podUID="5f90d0f9-48aa-460e-ab4a-4b6009861b8a"
May 17 00:19:57.754194 systemd[1]: run-containerd-runc-k8s.io-76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624-runc.wJCAHP.mount: Deactivated successfully.
May 17 00:19:58.769256 systemd-networkd[1368]: lxc_health: Gained IPv6LL
May 17 00:20:02.122275 systemd[1]: run-containerd-runc-k8s.io-76b725f4d9f9663c5e2496b72c01c57155c4a5e7e67a94f595b7a69e0ba06624-runc.noLymS.mount: Deactivated successfully.
May 17 00:20:02.341466 sshd[4641]: pam_unix(sshd:session): session closed for user core
May 17 00:20:02.353840 systemd[1]: sshd@22-91.99.12.209:22-139.178.68.195:53838.service: Deactivated successfully.
May 17 00:20:02.358740 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:20:02.360576 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit.
May 17 00:20:02.362334 systemd-logind[1458]: Removed session 23.
May 17 00:20:12.319568 containerd[1473]: time="2025-05-17T00:20:12.319196951Z" level=info msg="StopPodSandbox for \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\""
May 17 00:20:12.319568 containerd[1473]: time="2025-05-17T00:20:12.319466280Z" level=info msg="TearDown network for sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" successfully"
May 17 00:20:12.319568 containerd[1473]: time="2025-05-17T00:20:12.319497001Z" level=info msg="StopPodSandbox for \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" returns successfully"
May 17 00:20:12.320879 containerd[1473]: time="2025-05-17T00:20:12.320526234Z" level=info msg="RemovePodSandbox for \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\""
May 17 00:20:12.320879 containerd[1473]: time="2025-05-17T00:20:12.320568035Z" level=info msg="Forcibly stopping sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\""
May 17 00:20:12.320879 containerd[1473]: time="2025-05-17T00:20:12.320626757Z" level=info msg="TearDown network for sandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" successfully"
May 17 00:20:12.325349 containerd[1473]: time="2025-05-17T00:20:12.325056738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:20:12.325349 containerd[1473]: time="2025-05-17T00:20:12.325164822Z" level=info msg="RemovePodSandbox \"9a75ab833761c524541d9dfcee1b1897679e524d1dfaa387168ff59cadf315e6\" returns successfully"
May 17 00:20:12.325920 containerd[1473]: time="2025-05-17T00:20:12.325859964Z" level=info msg="StopPodSandbox for \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\""
May 17 00:20:12.327039 containerd[1473]: time="2025-05-17T00:20:12.325962007Z" level=info msg="TearDown network for sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" successfully"
May 17 00:20:12.327039 containerd[1473]: time="2025-05-17T00:20:12.325991888Z" level=info msg="StopPodSandbox for \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" returns successfully"
May 17 00:20:12.327039 containerd[1473]: time="2025-05-17T00:20:12.326394701Z" level=info msg="RemovePodSandbox for \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\""
May 17 00:20:12.327039 containerd[1473]: time="2025-05-17T00:20:12.326437342Z" level=info msg="Forcibly stopping sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\""
May 17 00:20:12.327039 containerd[1473]: time="2025-05-17T00:20:12.326513785Z" level=info msg="TearDown network for sandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" successfully"
May 17 00:20:12.330548 containerd[1473]: time="2025-05-17T00:20:12.330499392Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
May 17 00:20:12.330650 containerd[1473]: time="2025-05-17T00:20:12.330577515Z" level=info msg="RemovePodSandbox \"a149900254a481873642026c76c1550d49738cbd268f324dc4f90e23b9b8813f\" returns successfully"
May 17 00:20:18.036737 kubelet[2656]: E0517 00:20:18.036581 2656 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52954->10.0.0.2:2379: read: connection timed out"
May 17 00:20:19.100238 systemd[1]: cri-containerd-5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247.scope: Deactivated successfully.
May 17 00:20:19.100674 systemd[1]: cri-containerd-5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247.scope: Consumed 6.290s CPU time, 18.0M memory peak, 0B memory swap peak.
May 17 00:20:19.131885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247-rootfs.mount: Deactivated successfully.
May 17 00:20:19.142511 containerd[1473]: time="2025-05-17T00:20:19.142262962Z" level=info msg="shim disconnected" id=5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247 namespace=k8s.io
May 17 00:20:19.142511 containerd[1473]: time="2025-05-17T00:20:19.142381122Z" level=warning msg="cleaning up after shim disconnected" id=5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247 namespace=k8s.io
May 17 00:20:19.142511 containerd[1473]: time="2025-05-17T00:20:19.142396802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:20:19.340469 kubelet[2656]: I0517 00:20:19.340365 2656 scope.go:117] "RemoveContainer" containerID="5f9b6df31792ced96994a5efadaf5977e42f2b957875d70cccae4e7b7afca247"
May 17 00:20:19.344192 containerd[1473]: time="2025-05-17T00:20:19.343857638Z" level=info msg="CreateContainer within sandbox \"6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
May 17 00:20:19.358088 containerd[1473]: time="2025-05-17T00:20:19.357314886Z" level=info msg="CreateContainer within sandbox \"6d56ceda7bd539fbc9e718de7e0dc6ca6b07af11b11a4e6ca0c5c4d93dc63aeb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e226353d03ca3147814d86e3bb00562b63dcb0d1b16dcd8dfbc3c19fe0a749c1\""
May 17 00:20:19.359254 containerd[1473]: time="2025-05-17T00:20:19.359213721Z" level=info msg="StartContainer for \"e226353d03ca3147814d86e3bb00562b63dcb0d1b16dcd8dfbc3c19fe0a749c1\""
May 17 00:20:19.398402 systemd[1]: Started cri-containerd-e226353d03ca3147814d86e3bb00562b63dcb0d1b16dcd8dfbc3c19fe0a749c1.scope - libcontainer container e226353d03ca3147814d86e3bb00562b63dcb0d1b16dcd8dfbc3c19fe0a749c1.
May 17 00:20:19.436418 containerd[1473]: time="2025-05-17T00:20:19.436215176Z" level=info msg="StartContainer for \"e226353d03ca3147814d86e3bb00562b63dcb0d1b16dcd8dfbc3c19fe0a749c1\" returns successfully"
May 17 00:20:22.260230 kubelet[2656]: E0517 00:20:22.259512 2656 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52734->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-3-n-58e6742ed6.184028861720087c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-3-n-58e6742ed6,UID:f69e0941ca6d4cdc22ef91a8a629ef00,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-58e6742ed6,},FirstTimestamp:2025-05-17 00:20:11.798456444 +0000 UTC m=+359.652530976,LastTimestamp:2025-05-17 00:20:11.798456444 +0000 UTC m=+359.652530976,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-58e6742ed6,}"
May 17 00:20:23.460564 systemd[1]: cri-containerd-a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395.scope: Deactivated successfully.
May 17 00:20:23.460842 systemd[1]: cri-containerd-a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395.scope: Consumed 6.839s CPU time, 16.3M memory peak, 0B memory swap peak.
May 17 00:20:23.483146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395-rootfs.mount: Deactivated successfully.
May 17 00:20:23.491159 containerd[1473]: time="2025-05-17T00:20:23.490905122Z" level=info msg="shim disconnected" id=a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395 namespace=k8s.io
May 17 00:20:23.491159 containerd[1473]: time="2025-05-17T00:20:23.490973442Z" level=warning msg="cleaning up after shim disconnected" id=a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395 namespace=k8s.io
May 17 00:20:23.491159 containerd[1473]: time="2025-05-17T00:20:23.491017242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:20:24.359321 kubelet[2656]: I0517 00:20:24.359285 2656 scope.go:117] "RemoveContainer" containerID="a5115c08aee12d20507fa9dc5c729f15022abc6ccd1ddaa21bb1300c0fb7b395"
May 17 00:20:24.361831 containerd[1473]: time="2025-05-17T00:20:24.361653595Z" level=info msg="CreateContainer within sandbox \"b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
May 17 00:20:24.376353 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184790841.mount: Deactivated successfully.
May 17 00:20:24.386491 containerd[1473]: time="2025-05-17T00:20:24.386409165Z" level=info msg="CreateContainer within sandbox \"b85eb4161a937f7f728caac4ea5574be1f3269eba6bd6ae40a91a2482fd92c54\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c\""
May 17 00:20:24.387126 containerd[1473]: time="2025-05-17T00:20:24.387068844Z" level=info msg="StartContainer for \"9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c\""
May 17 00:20:24.427272 systemd[1]: Started cri-containerd-9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c.scope - libcontainer container 9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c.
May 17 00:20:24.471521 containerd[1473]: time="2025-05-17T00:20:24.471443223Z" level=info msg="StartContainer for \"9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c\" returns successfully"
May 17 00:20:24.485132 systemd[1]: run-containerd-runc-k8s.io-9ca3f150de1761ae48a0378b615afdc206c330ad0893216490884727df8dbd0c-runc.OW4WYR.mount: Deactivated successfully.