May 17 00:21:08.070934 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:21:08.070971 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:08.070985 kernel: BIOS-provided physical RAM map:
May 17 00:21:08.070995 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:21:08.071005 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:21:08.071015 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:21:08.071026 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
May 17 00:21:08.071036 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
May 17 00:21:08.071048 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
May 17 00:21:08.071058 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
May 17 00:21:08.071068 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:21:08.071077 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:21:08.071087 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
May 17 00:21:08.071097 kernel: NX (Execute Disable) protection: active
May 17 00:21:08.071111 kernel: APIC: Static calls initialized
May 17 00:21:08.071122 kernel: SMBIOS 3.0.0 present.
May 17 00:21:08.071133 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
May 17 00:21:08.071144 kernel: Hypervisor detected: KVM
May 17 00:21:08.071154 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:21:08.071165 kernel: kvm-clock: using sched offset of 3335714094 cycles
May 17 00:21:08.071177 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:21:08.071188 kernel: tsc: Detected 2495.310 MHz processor
May 17 00:21:08.071200 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:21:08.071214 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:21:08.071225 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
May 17 00:21:08.071236 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:21:08.071247 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:21:08.071258 kernel: Using GB pages for direct mapping
May 17 00:21:08.071269 kernel: ACPI: Early table checksum verification disabled
May 17 00:21:08.071279 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
May 17 00:21:08.071290 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071302 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071315 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071326 kernel: ACPI: FACS 0x000000007CFE0000 000040
May 17 00:21:08.071337 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071347 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071358 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071369 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:21:08.071380 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
May 17 00:21:08.071391 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
May 17 00:21:08.071408 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
May 17 00:21:08.071420 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
May 17 00:21:08.071431 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
May 17 00:21:08.071442 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
May 17 00:21:08.071454 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
May 17 00:21:08.071465 kernel: No NUMA configuration found
May 17 00:21:08.071480 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
May 17 00:21:08.071491 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
May 17 00:21:08.071503 kernel: Zone ranges:
May 17 00:21:08.071514 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:21:08.071525 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
May 17 00:21:08.071537 kernel: Normal empty
May 17 00:21:08.071548 kernel: Movable zone start for each node
May 17 00:21:08.071559 kernel: Early memory node ranges
May 17 00:21:08.071571 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:21:08.071582 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
May 17 00:21:08.071596 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
May 17 00:21:08.071607 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:21:08.071619 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:21:08.071630 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
May 17 00:21:08.071641 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:21:08.071669 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:21:08.071682 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:21:08.071694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:21:08.071705 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:21:08.071719 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:21:08.071730 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:21:08.071742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:21:08.071753 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:21:08.071765 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:21:08.071776 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:21:08.071787 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:21:08.071799 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
May 17 00:21:08.071810 kernel: Booting paravirtualized kernel on KVM
May 17 00:21:08.071825 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:21:08.071836 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:21:08.071848 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:21:08.071859 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:21:08.073944 kernel: pcpu-alloc: [0] 0 1
May 17 00:21:08.073963 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:21:08.073980 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:08.073993 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:21:08.074010 kernel: random: crng init done
May 17 00:21:08.074021 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:21:08.074033 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:21:08.074045 kernel: Fallback order for Node 0: 0
May 17 00:21:08.074056 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
May 17 00:21:08.074068 kernel: Policy zone: DMA32
May 17 00:21:08.074079 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:21:08.074091 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125152K reserved, 0K cma-reserved)
May 17 00:21:08.074103 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:21:08.074117 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:21:08.074129 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:21:08.074140 kernel: Dynamic Preempt: voluntary
May 17 00:21:08.074152 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:21:08.074165 kernel: rcu: RCU event tracing is enabled.
May 17 00:21:08.074177 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:21:08.074188 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:21:08.074200 kernel: Rude variant of Tasks RCU enabled.
May 17 00:21:08.074212 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:21:08.074223 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:21:08.074238 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:21:08.074249 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:21:08.074261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:21:08.074272 kernel: Console: colour VGA+ 80x25
May 17 00:21:08.074283 kernel: printk: console [tty0] enabled
May 17 00:21:08.074295 kernel: printk: console [ttyS0] enabled
May 17 00:21:08.074306 kernel: ACPI: Core revision 20230628
May 17 00:21:08.074318 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:21:08.074330 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:21:08.074344 kernel: x2apic enabled
May 17 00:21:08.074355 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:21:08.074367 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:21:08.074378 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
May 17 00:21:08.074390 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495310)
May 17 00:21:08.074401 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
May 17 00:21:08.074413 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
May 17 00:21:08.074425 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
May 17 00:21:08.074447 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:21:08.074459 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:21:08.074472 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:21:08.074484 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
May 17 00:21:08.074498 kernel: RETBleed: Mitigation: untrained return thunk
May 17 00:21:08.074510 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:21:08.074522 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:21:08.074534 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:21:08.074547 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:21:08.074561 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:21:08.074573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:21:08.074585 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
May 17 00:21:08.074597 kernel: Freeing SMP alternatives memory: 32K
May 17 00:21:08.074609 kernel: pid_max: default: 32768 minimum: 301
May 17 00:21:08.074621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:21:08.074633 kernel: landlock: Up and running.
May 17 00:21:08.074660 kernel: SELinux: Initializing.
May 17 00:21:08.074675 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:21:08.074689 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:21:08.074701 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
May 17 00:21:08.074714 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:08.074726 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:08.074738 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:21:08.074751 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
May 17 00:21:08.074763 kernel: ... version: 0
May 17 00:21:08.074775 kernel: ... bit width: 48
May 17 00:21:08.074789 kernel: ... generic registers: 6
May 17 00:21:08.074801 kernel: ... value mask: 0000ffffffffffff
May 17 00:21:08.074813 kernel: ... max period: 00007fffffffffff
May 17 00:21:08.074825 kernel: ... fixed-purpose events: 0
May 17 00:21:08.074837 kernel: ... event mask: 000000000000003f
May 17 00:21:08.074849 kernel: signal: max sigframe size: 1776
May 17 00:21:08.074862 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:21:08.074906 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:21:08.074918 kernel: smp: Bringing up secondary CPUs ...
May 17 00:21:08.074934 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:21:08.074946 kernel: .... node #0, CPUs: #1
May 17 00:21:08.074959 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:21:08.074971 kernel: smpboot: Max logical packages: 1
May 17 00:21:08.074983 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
May 17 00:21:08.074995 kernel: devtmpfs: initialized
May 17 00:21:08.075007 kernel: x86/mm: Memory block size: 128MB
May 17 00:21:08.075019 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:21:08.075031 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:21:08.075046 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:21:08.075058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:21:08.075071 kernel: audit: initializing netlink subsys (disabled)
May 17 00:21:08.075083 kernel: audit: type=2000 audit(1747441266.738:1): state=initialized audit_enabled=0 res=1
May 17 00:21:08.075095 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:21:08.075107 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:21:08.075119 kernel: cpuidle: using governor menu
May 17 00:21:08.075131 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:21:08.075143 kernel: dca service started, version 1.12.1
May 17 00:21:08.075158 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
May 17 00:21:08.075170 kernel: PCI: Using configuration type 1 for base access
May 17 00:21:08.075182 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:21:08.075195 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:21:08.075207 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
May 17 00:21:08.075219 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:21:08.075231 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:21:08.075243 kernel: ACPI: Added _OSI(Module Device)
May 17 00:21:08.075255 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:21:08.075270 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:21:08.075282 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:21:08.075294 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:21:08.075305 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:21:08.075317 kernel: ACPI: Interpreter enabled
May 17 00:21:08.075329 kernel: ACPI: PM: (supports S0 S5)
May 17 00:21:08.075341 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:21:08.075353 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:21:08.075366 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:21:08.075380 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
May 17 00:21:08.075392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:21:08.075616 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:21:08.075775 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
May 17 00:21:08.077014 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
May 17 00:21:08.077038 kernel: PCI host bridge to bus 0000:00
May 17 00:21:08.077171 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:21:08.077292 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:21:08.077402 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:21:08.077536 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
May 17 00:21:08.077717 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
May 17 00:21:08.080049 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
May 17 00:21:08.080170 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:21:08.080349 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
May 17 00:21:08.080500 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
May 17 00:21:08.080627 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
May 17 00:21:08.080776 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
May 17 00:21:08.080987 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
May 17 00:21:08.081117 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
May 17 00:21:08.081255 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:21:08.081408 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.081538 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
May 17 00:21:08.081684 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.081812 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
May 17 00:21:08.081990 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.082157 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
May 17 00:21:08.082369 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.082536 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
May 17 00:21:08.082732 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.082861 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
May 17 00:21:08.086993 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.087123 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
May 17 00:21:08.087255 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.087389 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
May 17 00:21:08.087519 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.087659 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
May 17 00:21:08.087792 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
May 17 00:21:08.087939 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
May 17 00:21:08.088070 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
May 17 00:21:08.088202 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
May 17 00:21:08.088333 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
May 17 00:21:08.088456 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
May 17 00:21:08.088577 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
May 17 00:21:08.088737 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
May 17 00:21:08.088861 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
May 17 00:21:08.089048 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:21:08.089178 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
May 17 00:21:08.089306 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
May 17 00:21:08.089432 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
May 17 00:21:08.089559 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:21:08.089693 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 17 00:21:08.089816 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:08.089997 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
May 17 00:21:08.090254 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
May 17 00:21:08.090389 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:21:08.090511 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 17 00:21:08.090634 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:08.090785 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
May 17 00:21:08.090991 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
May 17 00:21:08.091120 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
May 17 00:21:08.091242 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:21:08.091361 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 17 00:21:08.091481 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:08.091616 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
May 17 00:21:08.091758 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
May 17 00:21:08.093134 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:21:08.093288 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 17 00:21:08.093490 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:08.093705 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
May 17 00:21:08.093899 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
May 17 00:21:08.094086 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
May 17 00:21:08.094275 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:21:08.094458 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 17 00:21:08.094618 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:08.094777 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
May 17 00:21:08.095012 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
May 17 00:21:08.095196 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
May 17 00:21:08.095385 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:21:08.095574 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 17 00:21:08.095785 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:08.095813 kernel: acpiphp: Slot [0] registered
May 17 00:21:08.096048 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
May 17 00:21:08.096194 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
May 17 00:21:08.096360 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
May 17 00:21:08.096507 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
May 17 00:21:08.096597 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:21:08.096680 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 17 00:21:08.096751 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:08.096764 kernel: acpiphp: Slot [0-2] registered
May 17 00:21:08.096837 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:21:08.096936 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 17 00:21:08.097008 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:08.097018 kernel: acpiphp: Slot [0-3] registered
May 17 00:21:08.097088 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:21:08.097158 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 17 00:21:08.097228 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:08.097238 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:21:08.097249 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:21:08.097256 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:21:08.097264 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:21:08.097271 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
May 17 00:21:08.097278 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
May 17 00:21:08.097286 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
May 17 00:21:08.097293 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
May 17 00:21:08.097301 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
May 17 00:21:08.097308 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
May 17 00:21:08.097317 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
May 17 00:21:08.097325 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
May 17 00:21:08.097332 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
May 17 00:21:08.097340 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
May 17 00:21:08.097347 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
May 17 00:21:08.097355 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
May 17 00:21:08.097362 kernel: iommu: Default domain type: Translated
May 17 00:21:08.097370 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:21:08.097378 kernel: PCI: Using ACPI for IRQ routing
May 17 00:21:08.097387 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:21:08.097395 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:21:08.097402 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
May 17 00:21:08.097477 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
May 17 00:21:08.097549 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
May 17 00:21:08.097643 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:21:08.097665 kernel: vgaarb: loaded
May 17 00:21:08.097673 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:21:08.097682 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:21:08.097690 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:21:08.097697 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:21:08.097705 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:21:08.097712 kernel: pnp: PnP ACPI init
May 17 00:21:08.097809 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
May 17 00:21:08.097822 kernel: pnp: PnP ACPI: found 5 devices
May 17 00:21:08.097829 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:21:08.097839 kernel: NET: Registered PF_INET protocol family
May 17 00:21:08.097846 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:21:08.097854 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:21:08.097905 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:21:08.097913 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:21:08.097921 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:21:08.097928 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:21:08.097938 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:21:08.097947 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:21:08.097961 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:21:08.097971 kernel: NET: Registered PF_XDP protocol family
May 17 00:21:08.098051 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
May 17 00:21:08.098133 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
May 17 00:21:08.098218 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
May 17 00:21:08.098305 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
May 17 00:21:08.098380 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
May 17 00:21:08.098458 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
May 17 00:21:08.098559 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
May 17 00:21:08.098672 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
May 17 00:21:08.098764 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:08.098897 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
May 17 00:21:08.098995 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
May 17 00:21:08.099070 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:08.099159 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
May 17 00:21:08.099248 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
May 17 00:21:08.099341 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:08.099418 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
May 17 00:21:08.099489 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
May 17 00:21:08.099570 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:08.099662 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
May 17 00:21:08.099737 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
May 17 00:21:08.099815 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:08.099937 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
May 17 00:21:08.100014 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
May 17 00:21:08.100085 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:08.100156 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
May 17 00:21:08.100227 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
May 17 00:21:08.100297 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
May 17 00:21:08.100368 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:08.100438 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
May 17 00:21:08.100509 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
May 17 00:21:08.100580 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
May 17 00:21:08.100666 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:08.100741 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
May 17 00:21:08.100814 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
May 17 00:21:08.100927 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
May 17 00:21:08.101194 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:08.101501 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:21:08.101624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:21:08.101704 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:21:08.102008 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
May 17 00:21:08.102101 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
May 17 00:21:08.102170 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
May 17 00:21:08.102246 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
May 17 00:21:08.102312 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
May 17 00:21:08.102385 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
May 17 00:21:08.102453 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
May 17 00:21:08.102526 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
May 17 00:21:08.102597 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
May 17 00:21:08.102682 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
May 17 00:21:08.102750 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
May 17 00:21:08.102822 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
May 17 00:21:08.102907 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
May 17 00:21:08.102982 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
May 17 00:21:08.103053 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
May 17 00:21:08.103125 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
May 17 00:21:08.103193 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
May 17 00:21:08.103258 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
May 17 00:21:08.103337 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
May 17 00:21:08.103405 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
May 17 00:21:08.103472 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
May 17 00:21:08.103550 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
May 17 00:21:08.103620 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
May 17 00:21:08.103744 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
May 17 00:21:08.103758 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
May 17 00:21:08.103766 kernel: PCI: CLS 0 bytes, default 64
May 17 00:21:08.103774 kernel: Initialise system trusted keyrings
May 17 00:21:08.103782 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:21:08.103804 kernel: Key type asymmetric registered
May 17 00:21:08.103815 kernel: Asymmetric key parser 'x509' registered
May 17 00:21:08.103823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:21:08.103830 kernel: io scheduler mq-deadline registered
May 17 00:21:08.103838 kernel: io scheduler kyber registered
May 17 00:21:08.103845 kernel: io scheduler bfq registered
May 17 00:21:08.104546 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
May 17
00:21:08.104668 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 May 17 00:21:08.104749 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 May 17 00:21:08.104823 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 May 17 00:21:08.104926 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 May 17 00:21:08.105003 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 May 17 00:21:08.105080 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 May 17 00:21:08.105155 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 May 17 00:21:08.105234 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 May 17 00:21:08.105309 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 May 17 00:21:08.105386 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 May 17 00:21:08.105459 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 May 17 00:21:08.105538 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 May 17 00:21:08.105609 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 May 17 00:21:08.105694 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 May 17 00:21:08.105768 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 May 17 00:21:08.105780 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 17 00:21:08.105853 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 May 17 00:21:08.105982 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 May 17 00:21:08.105993 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 17 00:21:08.106004 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 May 17 00:21:08.106012 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 17 00:21:08.106019 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 17 00:21:08.106027 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 17 00:21:08.106035 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 17 00:21:08.106043 kernel: serio: i8042 AUX port at 0x60,0x64 
irq 12 May 17 00:21:08.106125 kernel: rtc_cmos 00:03: RTC can wake from S4 May 17 00:21:08.106138 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 17 00:21:08.106203 kernel: rtc_cmos 00:03: registered as rtc0 May 17 00:21:08.106273 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:21:07 UTC (1747441267) May 17 00:21:08.106339 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 17 00:21:08.106350 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled May 17 00:21:08.106358 kernel: NET: Registered PF_INET6 protocol family May 17 00:21:08.106366 kernel: Segment Routing with IPv6 May 17 00:21:08.106373 kernel: In-situ OAM (IOAM) with IPv6 May 17 00:21:08.106381 kernel: NET: Registered PF_PACKET protocol family May 17 00:21:08.106389 kernel: Key type dns_resolver registered May 17 00:21:08.106399 kernel: IPI shorthand broadcast: enabled May 17 00:21:08.106407 kernel: sched_clock: Marking stable (1287013041, 145598422)->(1442725195, -10113732) May 17 00:21:08.106414 kernel: registered taskstats version 1 May 17 00:21:08.106423 kernel: Loading compiled-in X.509 certificates May 17 00:21:08.106431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9' May 17 00:21:08.106439 kernel: Key type .fscrypt registered May 17 00:21:08.106446 kernel: Key type fscrypt-provisioning registered May 17 00:21:08.106454 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 17 00:21:08.106463 kernel: ima: Allocated hash algorithm: sha1
May 17 00:21:08.106472 kernel: ima: No architecture policies found
May 17 00:21:08.106479 kernel: clk: Disabling unused clocks
May 17 00:21:08.106487 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:21:08.106494 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:21:08.106502 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:21:08.106510 kernel: Run /init as init process
May 17 00:21:08.106517 kernel: with arguments:
May 17 00:21:08.106526 kernel: /init
May 17 00:21:08.106533 kernel: with environment:
May 17 00:21:08.106541 kernel: HOME=/
May 17 00:21:08.106549 kernel: TERM=linux
May 17 00:21:08.106557 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:21:08.106567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:21:08.106578 systemd[1]: Detected virtualization kvm.
May 17 00:21:08.106586 systemd[1]: Detected architecture x86-64.
May 17 00:21:08.106594 systemd[1]: Running in initrd.
May 17 00:21:08.106604 systemd[1]: No hostname configured, using default hostname.
May 17 00:21:08.106611 systemd[1]: Hostname set to .
May 17 00:21:08.106620 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:21:08.106628 systemd[1]: Queued start job for default target initrd.target.
May 17 00:21:08.106636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:08.106653 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:08.106662 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:21:08.106670 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:21:08.106680 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:21:08.106688 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:21:08.106698 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:21:08.106706 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:21:08.106714 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:08.106722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:21:08.106730 systemd[1]: Reached target paths.target - Path Units.
May 17 00:21:08.106740 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:21:08.106747 systemd[1]: Reached target swap.target - Swaps.
May 17 00:21:08.106755 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:21:08.106763 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:21:08.106771 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:21:08.106779 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:21:08.106787 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:21:08.106795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:08.106803 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:08.106812 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:08.106820 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:21:08.106828 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:21:08.106836 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:21:08.106844 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:21:08.106852 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:21:08.106860 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:21:08.106884 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:21:08.106894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:08.106902 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:21:08.106910 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:08.106918 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:21:08.106945 systemd-journald[188]: Collecting audit messages is disabled.
May 17 00:21:08.106968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:21:08.106977 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:21:08.106985 kernel: Bridge firewalling registered
May 17 00:21:08.106993 systemd-journald[188]: Journal started
May 17 00:21:08.107013 systemd-journald[188]: Runtime Journal (/run/log/journal/0990f4228d464da29e2dd2543d9269f2) is 4.8M, max 38.4M, 33.6M free.
May 17 00:21:08.061898 systemd-modules-load[189]: Inserted module 'overlay'
May 17 00:21:08.137168 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:21:08.103824 systemd-modules-load[189]: Inserted module 'br_netfilter'
May 17 00:21:08.142208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:08.142835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:08.151088 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:08.153094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:21:08.155157 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:21:08.158499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:21:08.165727 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:08.167730 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:21:08.170013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:21:08.173934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:08.176842 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:21:08.186021 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:21:08.186671 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:08.189767 dracut-cmdline[216]: dracut-dracut-053
May 17 00:21:08.193913 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:21:08.212099 systemd-resolved[224]: Positive Trust Anchors:
May 17 00:21:08.212728 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:21:08.212760 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:21:08.221110 systemd-resolved[224]: Defaulting to hostname 'linux'.
May 17 00:21:08.222028 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:21:08.222858 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:21:08.244920 kernel: SCSI subsystem initialized
May 17 00:21:08.253917 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:21:08.264906 kernel: iscsi: registered transport (tcp)
May 17 00:21:08.284007 kernel: iscsi: registered transport (qla4xxx)
May 17 00:21:08.284057 kernel: QLogic iSCSI HBA Driver
May 17 00:21:08.328953 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:21:08.336100 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:21:08.374400 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:21:08.374499 kernel: device-mapper: uevent: version 1.0.3
May 17 00:21:08.376450 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:21:08.434936 kernel: raid6: avx2x4 gen() 15695 MB/s
May 17 00:21:08.452932 kernel: raid6: avx2x2 gen() 17904 MB/s
May 17 00:21:08.470103 kernel: raid6: avx2x1 gen() 20425 MB/s
May 17 00:21:08.470202 kernel: raid6: using algorithm avx2x1 gen() 20425 MB/s
May 17 00:21:08.488918 kernel: raid6: .... xor() 15754 MB/s, rmw enabled
May 17 00:21:08.489001 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:21:08.507913 kernel: xor: automatically using best checksumming function avx
May 17 00:21:08.695938 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:21:08.709376 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:21:08.717187 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:21:08.728516 systemd-udevd[406]: Using default interface naming scheme 'v255'.
May 17 00:21:08.733428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:21:08.744287 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:21:08.760560 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
May 17 00:21:08.803745 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:21:08.810082 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:21:08.858841 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:08.869180 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:21:08.902369 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:21:08.905228 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:21:08.907976 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:08.909988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:21:08.919223 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:21:08.945133 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:21:08.948699 kernel: scsi host0: Virtio SCSI HBA
May 17 00:21:08.960739 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
May 17 00:21:08.960816 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:21:08.972233 kernel: ACPI: bus type USB registered
May 17 00:21:08.972295 kernel: usbcore: registered new interface driver usbfs
May 17 00:21:08.970604 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:21:08.977676 kernel: usbcore: registered new interface driver hub
May 17 00:21:08.970780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:08.973691 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:08.976117 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:08.976270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:08.976734 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:08.988241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:09.017909 kernel: usbcore: registered new device driver usb
May 17 00:21:09.054893 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:21:09.054948 kernel: AES CTR mode by8 optimization enabled
May 17 00:21:09.054959 kernel: libata version 3.00 loaded.
May 17 00:21:09.071998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:09.079272 kernel: ahci 0000:00:1f.2: version 3.0
May 17 00:21:09.080004 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 17 00:21:09.084603 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 17 00:21:09.084778 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 17 00:21:09.086468 kernel: sd 0:0:0:0: Power-on or device reset occurred
May 17 00:21:09.089350 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
May 17 00:21:09.089477 kernel: sd 0:0:0:0: [sda] Write Protect is off
May 17 00:21:09.089568 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
May 17 00:21:09.089842 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:21:09.092920 kernel: scsi host1: ahci
May 17 00:21:09.093044 kernel: scsi host2: ahci
May 17 00:21:09.097553 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
May 17 00:21:09.097730 kernel: scsi host3: ahci
May 17 00:21:09.100101 kernel: scsi host4: ahci
May 17 00:21:09.102878 kernel: scsi host5: ahci
May 17 00:21:09.106553 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:21:09.106575 kernel: scsi host6: ahci
May 17 00:21:09.106693 kernel: GPT:17805311 != 80003071
May 17 00:21:09.106703 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48
May 17 00:21:09.106719 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:21:09.106728 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48
May 17 00:21:09.106737 kernel: GPT:17805311 != 80003071
May 17 00:21:09.106745 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:21:09.106753 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48
May 17 00:21:09.106762 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:09.106771 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48
May 17 00:21:09.108877 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
May 17 00:21:09.108999 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48
May 17 00:21:09.119681 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48
May 17 00:21:09.120673 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:09.434357 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 17 00:21:09.434483 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 17 00:21:09.436878 kernel: ata3: SATA link down (SStatus 0 SControl 300)
May 17 00:21:09.440008 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 17 00:21:09.447017 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 17 00:21:09.447082 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 17 00:21:09.447103 kernel: ata1.00: applying bridge limits
May 17 00:21:09.447891 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 17 00:21:09.449904 kernel: ata1.00: configured for UDMA/100
May 17 00:21:09.455905 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 17 00:21:09.486392 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:21:09.486728 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
May 17 00:21:09.493301 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
May 17 00:21:09.501716 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
May 17 00:21:09.502034 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
May 17 00:21:09.507413 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
May 17 00:21:09.507752 kernel: hub 1-0:1.0: USB hub found
May 17 00:21:09.510905 kernel: hub 1-0:1.0: 4 ports detected
May 17 00:21:09.520041 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 17 00:21:09.520406 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
May 17 00:21:09.520759 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 17 00:21:09.521253 kernel: hub 2-0:1.0: USB hub found
May 17 00:21:09.544152 kernel: hub 2-0:1.0: 4 ports detected
May 17 00:21:09.544385 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (459)
May 17 00:21:09.547906 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
May 17 00:21:09.552882 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458)
May 17 00:21:09.562209 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
May 17 00:21:09.573394 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
May 17 00:21:09.579274 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:21:09.583638 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
May 17 00:21:09.584284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
May 17 00:21:09.590002 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:21:09.597998 disk-uuid[572]: Primary Header is updated.
May 17 00:21:09.597998 disk-uuid[572]: Secondary Entries is updated.
May 17 00:21:09.597998 disk-uuid[572]: Secondary Header is updated.
May 17 00:21:09.607989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:09.619892 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:09.763909 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
May 17 00:21:09.910084 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 00:21:09.916894 kernel: usbcore: registered new interface driver usbhid
May 17 00:21:09.916939 kernel: usbhid: USB HID core driver
May 17 00:21:09.927785 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
May 17 00:21:09.927832 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
May 17 00:21:10.631919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
May 17 00:21:10.633819 disk-uuid[573]: The operation has completed successfully.
May 17 00:21:10.701553 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:21:10.701737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:21:10.747208 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:21:10.750838 sh[593]: Success
May 17 00:21:10.767923 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 17 00:21:10.822619 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:21:10.824097 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:21:10.826322 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:21:10.848292 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:21:10.848356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:10.850630 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:21:10.853061 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:21:10.856071 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:21:10.865900 kernel: BTRFS info (device dm-0): enabling ssd optimizations
May 17 00:21:10.868570 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:21:10.870352 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:21:10.876090 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:21:10.881098 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:21:10.903007 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:10.906996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:10.907023 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:10.915298 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:10.915344 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:10.925105 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:21:10.928374 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:10.933892 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:21:10.943068 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:21:10.988847 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:21:10.996023 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:21:11.021996 systemd-networkd[774]: lo: Link UP
May 17 00:21:11.022004 systemd-networkd[774]: lo: Gained carrier
May 17 00:21:11.023594 systemd-networkd[774]: Enumeration completed
May 17 00:21:11.023694 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:21:11.024582 systemd[1]: Reached target network.target - Network.
May 17 00:21:11.025198 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:11.025200 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:11.028227 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:11.028231 systemd-networkd[774]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:11.028745 systemd-networkd[774]: eth0: Link UP
May 17 00:21:11.028749 systemd-networkd[774]: eth0: Gained carrier
May 17 00:21:11.028755 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:11.033599 systemd-networkd[774]: eth1: Link UP
May 17 00:21:11.033602 systemd-networkd[774]: eth1: Gained carrier
May 17 00:21:11.033611 systemd-networkd[774]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:11.038811 ignition[714]: Ignition 2.19.0
May 17 00:21:11.039431 ignition[714]: Stage: fetch-offline
May 17 00:21:11.039468 ignition[714]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.039476 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.040666 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:21:11.039555 ignition[714]: parsed url from cmdline: ""
May 17 00:21:11.039558 ignition[714]: no config URL provided
May 17 00:21:11.039563 ignition[714]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:21:11.039570 ignition[714]: no config at "/usr/lib/ignition/user.ign"
May 17 00:21:11.039575 ignition[714]: failed to fetch config: resource requires networking
May 17 00:21:11.039789 ignition[714]: Ignition finished successfully
May 17 00:21:11.046005 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:21:11.056337 ignition[782]: Ignition 2.19.0
May 17 00:21:11.056955 ignition[782]: Stage: fetch
May 17 00:21:11.057172 ignition[782]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.057186 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.057291 ignition[782]: parsed url from cmdline: ""
May 17 00:21:11.057296 ignition[782]: no config URL provided
May 17 00:21:11.057302 ignition[782]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:21:11.057312 ignition[782]: no config at "/usr/lib/ignition/user.ign"
May 17 00:21:11.057334 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
May 17 00:21:11.057496 ignition[782]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
May 17 00:21:11.066949 systemd-networkd[774]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:21:11.098973 systemd-networkd[774]: eth0: DHCPv4 address 37.27.213.195/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:21:11.258031 ignition[782]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
May 17 00:21:11.264008 ignition[782]: GET result: OK
May 17 00:21:11.264123 ignition[782]: parsing config with SHA512: a90c6f1496d7f2b3435f7cfd2935a1c66835beb606935c959b4571827e3854dd6a3a38b9f6e5f08a28a7c2d505731c6f6925267ce6c49ac38b3783facaf9509d
May 17 00:21:11.271501 unknown[782]: fetched base config from "system"
May 17 00:21:11.272184 unknown[782]: fetched base config from "system"
May 17 00:21:11.272199 unknown[782]: fetched user config from "hetzner"
May 17 00:21:11.273288 ignition[782]: fetch: fetch complete
May 17 00:21:11.276205 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:21:11.273301 ignition[782]: fetch: fetch passed
May 17 00:21:11.273386 ignition[782]: Ignition finished successfully
May 17 00:21:11.287266 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:21:11.313455 ignition[789]: Ignition 2.19.0
May 17 00:21:11.313474 ignition[789]: Stage: kargs
May 17 00:21:11.313782 ignition[789]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.313798 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.317409 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:21:11.315525 ignition[789]: kargs: kargs passed
May 17 00:21:11.315592 ignition[789]: Ignition finished successfully
May 17 00:21:11.325144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:21:11.347120 ignition[796]: Ignition 2.19.0
May 17 00:21:11.348579 ignition[796]: Stage: disks
May 17 00:21:11.349835 ignition[796]: no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.349853 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.353060 ignition[796]: disks: disks passed
May 17 00:21:11.353135 ignition[796]: Ignition finished successfully
May 17 00:21:11.354585 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:21:11.357349 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:21:11.358809 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:21:11.361219 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:21:11.363611 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:21:11.365822 systemd[1]: Reached target basic.target - Basic System.
May 17 00:21:11.373126 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:21:11.404768 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
May 17 00:21:11.409958 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:21:11.418251 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:21:11.538178 kernel: EXT4-fs (sda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:21:11.538721 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:21:11.539624 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:21:11.544969 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:21:11.547606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:21:11.550124 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:21:11.551752 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:21:11.551783 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:21:11.555831 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:21:11.561886 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
May 17 00:21:11.562732 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:21:11.578221 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:11.578258 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:11.578372 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:11.578400 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:11.578428 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:11.579348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:21:11.632267 coreos-metadata[814]: May 17 00:21:11.632 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
May 17 00:21:11.634026 coreos-metadata[814]: May 17 00:21:11.633 INFO Fetch successful
May 17 00:21:11.635234 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:21:11.637458 coreos-metadata[814]: May 17 00:21:11.636 INFO wrote hostname ci-4081-3-3-n-decaff31fa to /sysroot/etc/hostname
May 17 00:21:11.638204 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:21:11.641898 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
May 17 00:21:11.644265 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:21:11.647291 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:21:11.715789 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:21:11.719006 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:21:11.723007 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:21:11.725437 kernel: BTRFS info (device sda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:11.749951 ignition[929]: INFO : Ignition 2.19.0
May 17 00:21:11.749951 ignition[929]: INFO : Stage: mount
May 17 00:21:11.749951 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.749951 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.754820 ignition[929]: INFO : mount: mount passed
May 17 00:21:11.754820 ignition[929]: INFO : Ignition finished successfully
May 17 00:21:11.753441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:21:11.756324 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:21:11.767986 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:21:11.846843 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:21:11.853182 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:21:11.870958 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
May 17 00:21:11.876461 kernel: BTRFS info (device sda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:21:11.876520 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:21:11.881389 kernel: BTRFS info (device sda6): using free space tree
May 17 00:21:11.889095 kernel: BTRFS info (device sda6): enabling ssd optimizations
May 17 00:21:11.889152 kernel: BTRFS info (device sda6): auto enabling async discard
May 17 00:21:11.895971 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:21:11.940705 ignition[959]: INFO : Ignition 2.19.0
May 17 00:21:11.940705 ignition[959]: INFO : Stage: files
May 17 00:21:11.943785 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:11.943785 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:11.943785 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:21:11.949119 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:21:11.949119 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:21:11.952375 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:21:11.952375 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:21:11.952375 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:21:11.950826 unknown[959]: wrote ssh authorized keys file for user: core
May 17 00:21:11.958173 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:21:11.958173 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
May 17 00:21:12.131584 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 17 00:21:12.134051 systemd-networkd[774]: eth0: Gained IPv6LL
May 17 00:21:12.518058 systemd-networkd[774]: eth1: Gained IPv6LL
May 17 00:21:14.815171 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
May 17 00:21:14.815171 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:21:14.820087 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 17 00:21:15.498328 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 00:21:15.820314 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 17 00:21:15.820314 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:15.824434 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
May 17 00:21:16.658445 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 17 00:21:19.622218 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
May 17 00:21:19.622218 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 17 00:21:19.625980 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:21:19.625980 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:21:19.625980 ignition[959]: INFO : files: files passed
May 17 00:21:19.625980 ignition[959]: INFO : Ignition finished successfully
May 17 00:21:19.626199 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:21:19.639634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:21:19.643552 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:21:19.648453 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:21:19.648559 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:21:19.664216 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:19.664216 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:19.668817 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:21:19.667704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:21:19.670637 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:21:19.681193 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:21:19.712146 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:21:19.712316 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:21:19.714763 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:21:19.716080 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:21:19.716587 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:21:19.725006 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:21:19.743791 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:21:19.753118 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:21:19.768893 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:21:19.770148 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:19.772189 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:21:19.773795 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:21:19.774033 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:21:19.774875 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:21:19.775525 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:21:19.777256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:21:19.779139 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:21:19.780732 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:21:19.782305 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:21:19.784505 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:21:19.786725 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:21:19.788850 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:21:19.790877 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:21:19.792911 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:21:19.793081 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:21:19.795738 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:21:19.797047 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:19.798749 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:21:19.798912 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:19.800671 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:21:19.800857 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:21:19.803709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:21:19.803893 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:21:19.805022 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:21:19.805157 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:21:19.807123 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:21:19.807261 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:21:19.817973 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:21:19.822190 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:21:19.822593 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:19.836143 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:21:19.839031 ignition[1012]: INFO : Ignition 2.19.0
May 17 00:21:19.839031 ignition[1012]: INFO : Stage: umount
May 17 00:21:19.839031 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:21:19.839031 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
May 17 00:21:19.836958 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:21:19.854652 ignition[1012]: INFO : umount: umount passed
May 17 00:21:19.854652 ignition[1012]: INFO : Ignition finished successfully
May 17 00:21:19.837124 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:19.842157 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:21:19.842313 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:21:19.845456 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:21:19.845585 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:21:19.859175 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:21:19.860607 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:21:19.862505 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:21:19.862611 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:21:19.865888 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:21:19.865946 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:21:19.867039 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:21:19.867101 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:21:19.868397 systemd[1]: Stopped target network.target - Network.
May 17 00:21:19.869932 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:21:19.869991 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:21:19.871821 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:21:19.873271 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:21:19.877023 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:19.878276 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:21:19.879825 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:21:19.881590 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:21:19.881636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:21:19.883431 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:21:19.883478 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:21:19.884931 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:21:19.884985 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:21:19.886718 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:21:19.886780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:21:19.888842 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:21:19.890323 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:21:19.893386 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:21:19.894213 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:21:19.894355 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:21:19.895906 systemd-networkd[774]: eth0: DHCPv6 lease lost
May 17 00:21:19.897105 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:21:19.897197 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:21:19.898908 systemd-networkd[774]: eth1: DHCPv6 lease lost
May 17 00:21:19.900110 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:21:19.900354 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:21:19.903708 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:21:19.903845 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:21:19.905697 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:21:19.906131 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:19.913975 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:21:19.915075 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:21:19.915131 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:21:19.917488 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:21:19.917538 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:19.919135 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:21:19.919182 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:19.920505 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:21:19.920550 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:21:19.922176 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:21:19.932214 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:21:19.932374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:21:19.933485 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:21:19.933575 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:21:19.935221 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:21:19.935295 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:19.936432 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:21:19.936470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:19.937778 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:21:19.937828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:21:19.939779 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:21:19.939826 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:21:19.941343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:21:19.941393 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:21:19.952035 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:21:19.953799 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:21:19.953855 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:19.954610 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:19.954654 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:19.957472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:21:19.957558 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:21:19.959699 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:21:19.967010 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:21:19.974217 systemd[1]: Switching root.
May 17 00:21:20.016524 systemd-journald[188]: Journal stopped
May 17 00:21:21.106593 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
May 17 00:21:21.106641 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:21:21.106659 kernel: SELinux: policy capability open_perms=1
May 17 00:21:21.106670 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:21:21.106678 kernel: SELinux: policy capability always_check_network=0
May 17 00:21:21.106695 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:21:21.106704 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:21:21.106713 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:21:21.106725 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:21:21.106734 kernel: audit: type=1403 audit(1747441280.244:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:21:21.106744 systemd[1]: Successfully loaded SELinux policy in 57.800ms.
May 17 00:21:21.106763 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.565ms.
May 17 00:21:21.106774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:21:21.106784 systemd[1]: Detected virtualization kvm.
May 17 00:21:21.106794 systemd[1]: Detected architecture x86-64.
May 17 00:21:21.106803 systemd[1]: Detected first boot.
May 17 00:21:21.106815 systemd[1]: Hostname set to .
May 17 00:21:21.106824 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:21:21.106833 zram_generator::config[1055]: No configuration found.
May 17 00:21:21.106847 systemd[1]: Populated /etc with preset unit settings.
May 17 00:21:21.106856 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 17 00:21:21.106875 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 17 00:21:21.106885 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 17 00:21:21.106895 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:21:21.106905 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:21:21.106914 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:21:21.106923 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:21:21.106933 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:21:21.106944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:21:21.106955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:21:21.106964 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:21:21.106978 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:21:21.106987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:21:21.106997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:21:21.107006 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:21:21.107016 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:21:21.107028 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:21:21.107038 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:21:21.107047 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:21:21.107057 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 17 00:21:21.107067 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 17 00:21:21.107076 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 17 00:21:21.107088 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:21:21.107100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:21:21.107111 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:21:21.107120 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:21:21.107130 systemd[1]: Reached target swap.target - Swaps.
May 17 00:21:21.107139 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:21:21.107149 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:21:21.107158 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:21:21.107168 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:21:21.107178 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:21:21.107189 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:21:21.107198 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:21:21.107207 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:21:21.107217 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:21:21.107230 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:21.107241 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:21:21.107251 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:21:21.107261 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:21:21.107271 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:21:21.107280 systemd[1]: Reached target machines.target - Containers.
May 17 00:21:21.107290 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:21:21.107300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:21.107310 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:21:21.107319 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:21:21.107330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:21:21.107340 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:21:21.107350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:21:21.107360 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:21:21.107370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:21:21.107381 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:21:21.107391 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 17 00:21:21.107404 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 17 00:21:21.107414 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 17 00:21:21.107424 systemd[1]: Stopped systemd-fsck-usr.service.
May 17 00:21:21.107434 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:21:21.107443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:21:21.107453 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:21:21.107463 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:21:21.107472 kernel: loop: module loaded
May 17 00:21:21.107481 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:21:21.107490 kernel: fuse: init (API version 7.39)
May 17 00:21:21.107499 systemd[1]: verity-setup.service: Deactivated successfully.
May 17 00:21:21.107510 systemd[1]: Stopped verity-setup.service.
May 17 00:21:21.107520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:21.107541 systemd-journald[1135]: Collecting audit messages is disabled.
May 17 00:21:21.107561 kernel: ACPI: bus type drm_connector registered
May 17 00:21:21.107571 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:21:21.107581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:21:21.107591 systemd-journald[1135]: Journal started
May 17 00:21:21.107612 systemd-journald[1135]: Runtime Journal (/run/log/journal/0990f4228d464da29e2dd2543d9269f2) is 4.8M, max 38.4M, 33.6M free.
May 17 00:21:20.813557 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:21:20.829672 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
May 17 00:21:20.830454 systemd[1]: systemd-journald.service: Deactivated successfully.
May 17 00:21:21.109539 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:21:21.111233 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:21:21.111739 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:21:21.112401 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:21:21.112992 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:21:21.113613 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:21:21.114437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:21:21.115104 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:21:21.115208 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:21:21.115852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:21:21.116038 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:21:21.116652 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:21:21.116755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:21:21.117371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:21:21.117467 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:21:21.118135 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:21:21.118230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:21:21.118889 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:21:21.118989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:21:21.120124 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:21:21.120754 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:21:21.121474 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:21:21.128673 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:21:21.134944 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:21:21.138966 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:21:21.139578 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:21:21.139607 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:21:21.142642 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:21:21.152253 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:21:21.160998 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:21:21.161915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:21.166213 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:21:21.174106 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:21:21.174637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:21:21.182759 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:21:21.183592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:21:21.188937 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:21:21.196956 systemd-journald[1135]: Time spent on flushing to /var/log/journal/0990f4228d464da29e2dd2543d9269f2 is 49.202ms for 1130 entries.
May 17 00:21:21.196956 systemd-journald[1135]: System Journal (/var/log/journal/0990f4228d464da29e2dd2543d9269f2) is 8.0M, max 584.8M, 576.8M free.
May 17 00:21:21.272092 systemd-journald[1135]: Received client request to flush runtime journal.
May 17 00:21:21.272124 kernel: loop0: detected capacity change from 0 to 224512
May 17 00:21:21.201095 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:21:21.206012 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:21:21.209023 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:21:21.211288 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:21:21.211816 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:21:21.214110 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:21:21.214910 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:21:21.219609 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:21:21.230214 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:21:21.232970 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:21:21.248900 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:21:21.266913 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 17 00:21:21.273070 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:21:21.279194 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:21:21.293125 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:21:21.295233 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:21:21.302544 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:21:21.313993 kernel: loop1: detected capacity change from 0 to 8
May 17 00:21:21.311726 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:21:21.332052 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
May 17 00:21:21.332375 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
May 17 00:21:21.333881 kernel: loop2: detected capacity change from 0 to 142488
May 17 00:21:21.337648 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:21:21.381889 kernel: loop3: detected capacity change from 0 to 140768
May 17 00:21:21.429012 kernel: loop4: detected capacity change from 0 to 224512
May 17 00:21:21.461007 kernel: loop5: detected capacity change from 0 to 8
May 17 00:21:21.464917 kernel: loop6: detected capacity change from 0 to 142488
May 17 00:21:21.485916 kernel: loop7: detected capacity change from 0 to 140768
May 17 00:21:21.514525 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
May 17 00:21:21.515243 (sd-merge)[1200]: Merged extensions into '/usr'.
May 17 00:21:21.520193 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:21:21.520319 systemd[1]: Reloading...
May 17 00:21:21.605092 zram_generator::config[1229]: No configuration found.
May 17 00:21:21.700742 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:21:21.753361 systemd[1]: Reloading finished in 232 ms.
May 17 00:21:21.758128 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:21:21.774294 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:21:21.776535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:21:21.784512 systemd[1]: Starting ensure-sysext.service...
May 17 00:21:21.786992 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:21:21.795736 systemd[1]: Reloading requested from client PID 1269 ('systemctl') (unit ensure-sysext.service)...
May 17 00:21:21.795877 systemd[1]: Reloading...
May 17 00:21:21.801889 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:21:21.802171 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:21:21.805085 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:21:21.805386 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 17 00:21:21.805853 systemd-tmpfiles[1270]: ACLs are not supported, ignoring.
May 17 00:21:21.812969 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:21:21.813421 systemd-tmpfiles[1270]: Skipping /boot
May 17 00:21:21.822099 systemd-tmpfiles[1270]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:21:21.823309 systemd-tmpfiles[1270]: Skipping /boot
May 17 00:21:21.849797 zram_generator::config[1293]: No configuration found.
May 17 00:21:21.947579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:21:21.999558 systemd[1]: Reloading finished in 202 ms.
May 17 00:21:22.012256 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:21:22.013314 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:21:22.028522 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:21:22.032220 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:21:22.042343 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:21:22.047990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:21:22.057017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:21:22.058354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:21:22.064058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.064194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:22.068060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:21:22.072021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:21:22.080045 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:21:22.080589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:22.080683 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.090931 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:21:22.095029 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.095259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:22.095555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:22.095999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.099555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.099879 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:22.107590 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:21:22.108180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:22.108295 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.108817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:21:22.109583 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:21:22.109675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:21:22.116381 systemd[1]: Finished ensure-sysext.service.
May 17 00:21:22.117101 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
May 17 00:21:22.123757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:21:22.123923 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:21:22.126685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:21:22.137596 augenrules[1372]: No rules
May 17 00:21:22.139977 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:21:22.141055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:21:22.142430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:21:22.142980 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:21:22.144602 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:21:22.145071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:21:22.148176 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:21:22.151645 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:21:22.157378 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:21:22.165749 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:21:22.170722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:21:22.172327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:21:22.181009 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:21:22.190172 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:21:22.190929 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:21:22.234918 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
May 17 00:21:22.273600 systemd-networkd[1395]: lo: Link UP
May 17 00:21:22.273612 systemd-networkd[1395]: lo: Gained carrier
May 17 00:21:22.275261 systemd-networkd[1395]: Enumeration completed
May 17 00:21:22.275325 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:21:22.283030 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:21:22.284215 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.284219 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:22.284815 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.284818 systemd-networkd[1395]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:21:22.285304 systemd-networkd[1395]: eth0: Link UP
May 17 00:21:22.285307 systemd-networkd[1395]: eth0: Gained carrier
May 17 00:21:22.285318 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.288278 systemd-networkd[1395]: eth1: Link UP
May 17 00:21:22.288291 systemd-networkd[1395]: eth1: Gained carrier
May 17 00:21:22.288310 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.306128 systemd-networkd[1395]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.321812 systemd-resolved[1352]: Positive Trust Anchors:
May 17 00:21:22.324332 systemd-networkd[1395]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
May 17 00:21:22.326096 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:21:22.326131 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:21:22.327352 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:21:22.328018 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:21:22.331043 systemd-resolved[1352]: Using system hostname 'ci-4081-3-3-n-decaff31fa'.
May 17 00:21:22.332636 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:21:22.333726 systemd[1]: Reached target network.target - Network.
May 17 00:21:22.334347 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:21:22.338924 systemd-networkd[1395]: eth0: DHCPv4 address 37.27.213.195/32, gateway 172.31.1.1 acquired from 172.31.1.1
May 17 00:21:22.339469 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 17 00:21:22.339839 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 17 00:21:22.346550 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:21:22.365912 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1404)
May 17 00:21:22.384940 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
May 17 00:21:22.402649 kernel: ACPI: button: Power Button [PWRF]
May 17 00:21:22.401840 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
May 17 00:21:22.401893 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.401972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:21:22.409395 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:21:22.416101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:21:22.419201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:21:22.420474 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:21:22.420513 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:21:22.420525 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:21:22.420812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:21:22.420988 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:21:22.425879 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
May 17 00:21:22.427184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:21:22.427288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:21:22.428532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:21:22.428894 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
May 17 00:21:22.434886 kernel: Console: switching to colour dummy device 80x25
May 17 00:21:22.435388 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
May 17 00:21:22.436712 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 17 00:21:22.436732 kernel: [drm] features: -context_init
May 17 00:21:22.439148 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
May 17 00:21:22.439308 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
May 17 00:21:22.440891 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
May 17 00:21:22.441851 kernel: [drm] number of scanouts: 1
May 17 00:21:22.441891 kernel: [drm] number of cap sets: 0
May 17 00:21:22.442880 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
May 17 00:21:22.445048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:21:22.445427 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:21:22.445896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:21:22.447129 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:21:22.448897 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 17 00:21:22.449891 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:21:22.458192 kernel: Console: switching to colour frame buffer device 160x50
May 17 00:21:22.461883 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
May 17 00:21:22.464930 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 17 00:21:22.474701 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:21:22.493015 kernel: EDAC MC: Ver: 3.0.0
May 17 00:21:22.525175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:22.531779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:22.531953 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:22.535641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:22.542518 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:21:22.542768 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:22.546046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:21:22.628236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:21:22.660483 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:21:22.667126 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:21:22.696356 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:21:22.729509 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:21:22.730217 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:21:22.731038 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:21:22.731392 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:21:22.731598 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:21:22.732184 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:21:22.732542 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:21:22.732728 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:21:22.732835 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:21:22.733450 systemd[1]: Reached target paths.target - Path Units.
May 17 00:21:22.733601 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:21:22.735605 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:21:22.739532 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:21:22.752732 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:21:22.763099 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:21:22.767043 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:21:22.770550 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:21:22.771619 systemd[1]: Reached target basic.target - Basic System.
May 17 00:21:22.773155 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:21:22.774538 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:21:22.774590 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:21:22.781092 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:21:22.794086 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:21:22.799148 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:21:22.805096 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:21:22.809098 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:21:22.809835 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:21:22.815127 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:21:22.827038 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 17 00:21:22.834009 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
May 17 00:21:22.838090 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:21:22.843046 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:21:22.851283 jq[1466]: false
May 17 00:21:22.851365 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:21:22.852311 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:21:22.852730 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:21:22.861010 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:21:22.865008 coreos-metadata[1462]: May 17 00:21:22.864 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
May 17 00:21:22.868163 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:21:22.869198 coreos-metadata[1462]: May 17 00:21:22.868 INFO Fetch successful
May 17 00:21:22.869198 coreos-metadata[1462]: May 17 00:21:22.869 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
May 17 00:21:22.869894 coreos-metadata[1462]: May 17 00:21:22.869 INFO Fetch successful
May 17 00:21:22.871920 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:21:22.890177 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:21:22.890318 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:21:22.892142 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:21:22.892271 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:21:22.894422 update_engine[1475]: I20250517 00:21:22.894320  1475 main.cc:92] Flatcar Update Engine starting
May 17 00:21:22.915434 extend-filesystems[1467]: Found loop4
May 17 00:21:22.920658 extend-filesystems[1467]: Found loop5
May 17 00:21:22.920658 extend-filesystems[1467]: Found loop6
May 17 00:21:22.920658 extend-filesystems[1467]: Found loop7
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda1
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda2
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda3
May 17 00:21:22.920658 extend-filesystems[1467]: Found usr
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda4
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda6
May 17 00:21:22.920658 extend-filesystems[1467]: Found sda7
May 17 00:21:23.007856 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
May 17 00:21:23.010007 jq[1477]: true
May 17 00:21:23.010122 update_engine[1475]: I20250517 00:21:22.974731  1475 update_check_scheduler.cc:74] Next update check in 5m0s
May 17 00:21:22.931202 (ntainerd)[1490]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:21:22.931442 dbus-daemon[1463]: [system] SELinux support is enabled
May 17 00:21:23.010577 extend-filesystems[1467]: Found sda9
May 17 00:21:23.010577 extend-filesystems[1467]: Checking size of /dev/sda9
May 17 00:21:23.010577 extend-filesystems[1467]: Resized partition /dev/sda9
May 17 00:21:23.023817 tar[1483]: linux-amd64/LICENSE
May 17 00:21:23.023817 tar[1483]: linux-amd64/helm
May 17 00:21:22.931654 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:21:23.043326 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024)
May 17 00:21:22.942981 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:21:22.943919 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:21:23.048443 jq[1498]: true
May 17 00:21:22.945455 systemd-logind[1474]: New seat seat0.
May 17 00:21:22.949751 systemd-logind[1474]: Watching system buttons on /dev/input/event2 (Power Button)
May 17 00:21:22.949769 systemd-logind[1474]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:21:22.950554 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:21:22.950588 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:21:22.964420 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:21:22.964441 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:21:22.974774 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:21:22.994977 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:21:23.017942 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:21:23.059805 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:21:23.060979 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:21:23.116825 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1393)
May 17 00:21:23.155417 bash[1532]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:21:23.153645 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:21:23.167122 systemd[1]: Starting sshkeys.service...
May 17 00:21:23.172896 kernel: EXT4-fs (sda9): resized filesystem to 9393147
May 17 00:21:23.204118 extend-filesystems[1510]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
May 17 00:21:23.204118 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 5
May 17 00:21:23.204118 extend-filesystems[1510]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
May 17 00:21:23.216759 extend-filesystems[1467]: Resized filesystem in /dev/sda9
May 17 00:21:23.216759 extend-filesystems[1467]: Found sr0
May 17 00:21:23.208094 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:21:23.208243 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:21:23.221205 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:21:23.228514 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:21:23.241454 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:21:23.295181 coreos-metadata[1547]: May 17 00:21:23.293 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
May 17 00:21:23.295181 coreos-metadata[1547]: May 17 00:21:23.294 INFO Fetch successful
May 17 00:21:23.297171 unknown[1547]: wrote ssh authorized keys file for user: core
May 17 00:21:23.315702 containerd[1490]: time="2025-05-17T00:21:23.315603624Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:21:23.322912 update-ssh-keys[1551]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:21:23.323589 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:21:23.327335 systemd[1]: Finished sshkeys.service.
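[Editor's note] The resize figures logged above can be sanity-checked with simple arithmetic: the kernel reports the ext4 filesystem on /dev/sda9 growing from 1617920 to 9393147 blocks at the 4 KiB block size resize2fs reports ("9393147 (4k) blocks"). A quick sketch of that conversion:

```python
# Sanity-check of the on-line ext4 resize reported in the log above:
# /dev/sda9 grew from 1,617,920 to 9,393,147 blocks at 4 KiB per block.
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output

old_blocks = 1_617_920
new_blocks = 9_393_147

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30

# Roughly 6.17 GiB before, 35.83 GiB after the root partition expansion.
print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
```

This matches the expected behaviour on first boot: Flatcar grows the ROOT partition to fill the disk, and extend-filesystems.service performs the on-line resize while / stays mounted.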
May 17 00:21:23.394656 containerd[1490]: time="2025-05-17T00:21:23.394598146Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.400230 containerd[1490]: time="2025-05-17T00:21:23.400198360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:21:23.400230 containerd[1490]: time="2025-05-17T00:21:23.400228466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:21:23.400304 containerd[1490]: time="2025-05-17T00:21:23.400242823Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:21:23.400388 containerd[1490]: time="2025-05-17T00:21:23.400371896Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:21:23.400410 containerd[1490]: time="2025-05-17T00:21:23.400391532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.400466 containerd[1490]: time="2025-05-17T00:21:23.400449120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:21:23.400490 containerd[1490]: time="2025-05-17T00:21:23.400466152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.401003 containerd[1490]: time="2025-05-17T00:21:23.400983443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:21:23.401024 containerd[1490]: time="2025-05-17T00:21:23.401004242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.401024 containerd[1490]: time="2025-05-17T00:21:23.401018879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:21:23.401053 containerd[1490]: time="2025-05-17T00:21:23.401027986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.401106 containerd[1490]: time="2025-05-17T00:21:23.401090413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.401340 containerd[1490]: time="2025-05-17T00:21:23.401268617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:21:23.401840 containerd[1490]: time="2025-05-17T00:21:23.401822096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:21:23.401860 containerd[1490]: time="2025-05-17T00:21:23.401841522Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:21:23.402110 containerd[1490]: time="2025-05-17T00:21:23.402093935Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:21:23.402151 containerd[1490]: time="2025-05-17T00:21:23.402138078Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:21:23.407297 containerd[1490]: time="2025-05-17T00:21:23.407274362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.407665776Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.407683739Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.407733753Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.407749262Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.407845633Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408418638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408532150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408548020Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408558600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408572276Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408583277Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408593385Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408604897Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:21:23.409873 containerd[1490]: time="2025-05-17T00:21:23.408616880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408629243Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408640634Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408650673Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408669959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408683915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408705807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408718170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408730102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408742124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408752544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408763755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408775627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408791657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410094 containerd[1490]: time="2025-05-17T00:21:23.408802147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408815112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408825952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408841140Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408858543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408882588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.408896293Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409450944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409471062Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409481301Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409738774Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409753361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409771575Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409781293Z" level=info msg="NRI interface is disabled by configuration."
May 17 00:21:23.410342 containerd[1490]: time="2025-05-17T00:21:23.409790221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:21:23.410551 containerd[1490]: time="2025-05-17T00:21:23.410057221Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:21:23.410551 containerd[1490]: time="2025-05-17T00:21:23.410107576Z" level=info msg="Connect containerd service"
May 17 00:21:23.410551 containerd[1490]: time="2025-05-17T00:21:23.410133304Z" level=info msg="using legacy CRI server"
May 17 00:21:23.410551 containerd[1490]: time="2025-05-17T00:21:23.410138935Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 17 00:21:23.415893 containerd[1490]: time="2025-05-17T00:21:23.415504899Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:21:23.416523 containerd[1490]: time="2025-05-17T00:21:23.416503612Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:21:23.418125 containerd[1490]: time="2025-05-17T00:21:23.418048860Z" level=info msg="Start subscribing containerd event"
May 17 00:21:23.418125 containerd[1490]: time="2025-05-17T00:21:23.418099786Z" level=info msg="Start recovering state"
May 17 00:21:23.418211 containerd[1490]: time="2025-05-17T00:21:23.418198130Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:21:23.418406 containerd[1490]: time="2025-05-17T00:21:23.418392214Z" level=info msg="Start event monitor"
May 17 00:21:23.418428 containerd[1490]: time="2025-05-17T00:21:23.418409847Z" level=info msg="Start snapshots syncer"
May 17 00:21:23.418428 containerd[1490]: time="2025-05-17T00:21:23.418417351Z" level=info msg="Start cni network conf syncer for default"
May 17 00:21:23.418428 containerd[1490]: time="2025-05-17T00:21:23.418423342Z" level=info msg="Start streaming server"
May 17 00:21:23.420889 containerd[1490]: time="2025-05-17T00:21:23.419904070Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:21:23.420889 containerd[1490]: time="2025-05-17T00:21:23.419957259Z" level=info msg="containerd successfully booted in 0.106304s"
May 17 00:21:23.420033 systemd[1]: Started containerd.service - containerd container runtime.
May 17 00:21:23.590023 systemd-networkd[1395]: eth1: Gained IPv6LL
May 17 00:21:23.590822 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 17 00:21:23.595532 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:21:23.599174 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:21:23.608010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:21:23.610170 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:21:23.650365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
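[Editor's note] The "Start cri plugin with config {...}" dump above records the effective cri plugin settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, CNI binaries under /opt/cni/bin with configs in /etc/cni/net.d, and sandbox image registry.k8s.io/pause:3.8. A containerd 1.7 config.toml expressing those same values would look roughly like the sketch below. This is a reconstruction from the logged values for readability, not the actual file on this host:

```toml
# Sketch of a /etc/containerd/config.toml matching the cri plugin config
# dumped in the log above. Reconstructed for illustration; the host's real
# configuration file may be laid out differently.
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

  # The level=error "failed to load cni during init" above is expected at this
  # point: /etc/cni/net.d is empty until a network plugin is installed.
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd]
    snapshotter = "overlayfs"
    default_runtime_name = "runc"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
```

Note the overlayfs choice is consistent with the snapshotter probing earlier in the log: btrfs and zfs were skipped because /var/lib/containerd sits on ext4, and aufs because the module is absent from this kernel.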
May 17 00:21:23.700788 sshd_keygen[1504]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:21:23.720400 tar[1483]: linux-amd64/README.md
May 17 00:21:23.720775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:21:23.730556 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:21:23.733118 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 17 00:21:23.741740 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:21:23.741928 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 17 00:21:23.751336 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 17 00:21:23.760279 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 17 00:21:23.769253 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 17 00:21:23.773000 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
May 17 00:21:23.773793 systemd[1]: Reached target getty.target - Login Prompts.
May 17 00:21:23.846058 systemd-networkd[1395]: eth0: Gained IPv6LL
May 17 00:21:23.847142 systemd-timesyncd[1373]: Network configuration changed, trying to establish connection.
May 17 00:21:24.903675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:21:24.905528 systemd[1]: Reached target multi-user.target - Multi-User System.
May 17 00:21:24.910375 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:21:24.910682 systemd[1]: Startup finished in 1.473s (kernel) + 12.458s (initrd) + 4.722s (userspace) = 18.654s.
May 17 00:21:25.761672 kubelet[1594]: E0517 00:21:25.761585    1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:21:25.765018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:21:25.765195 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:21:25.765477 systemd[1]: kubelet.service: Consumed 1.471s CPU time.
May 17 00:21:36.016005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:21:36.023685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:21:36.159793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:21:36.162411 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:21:36.207266 kubelet[1613]: E0517 00:21:36.207186    1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:21:36.212251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:21:36.212421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:21:46.463057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 17 00:21:46.468154 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:21:46.578514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
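[Editor's note] The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the expected state of a node where kubelet.service is enabled before `kubeadm init` or `kubeadm join` has run: in a kubeadm workflow it is kubeadm that writes /var/lib/kubelet/config.yaml. For orientation only, a minimal KubeletConfiguration of the kind kubeadm generates looks roughly like this. It is an illustrative sketch, not the file from this host:

```yaml
# Illustrative /var/lib/kubelet/config.yaml of the kind written by kubeadm.
# Until some such file exists, kubelet exits with status 1 and systemd keeps
# restarting it, which is exactly the loop visible in this log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd cgroup driver, matching SystemdCgroup=true in the containerd
# cri plugin config dumped earlier in this boot.
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
```

The repeated failures that follow are therefore benign until the node is actually joined to a cluster.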
May 17 00:21:46.591190 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:21:46.634598 kubelet[1628]: E0517 00:21:46.634499    1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:21:46.637703 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:21:46.637845 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:21:54.992440 systemd-resolved[1352]: Clock change detected. Flushing caches.
May 17 00:21:54.992684 systemd-timesyncd[1373]: Contacted time server 85.215.166.214:123 (2.flatcar.pool.ntp.org).
May 17 00:21:54.992751 systemd-timesyncd[1373]: Initial clock synchronization to Sat 2025-05-17 00:21:54.992337 UTC.
May 17 00:21:57.321526 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
May 17 00:21:57.330906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:21:57.504925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:21:57.517926 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:21:57.576757 kubelet[1644]: E0517 00:21:57.576607    1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:21:57.579659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:21:57.579851 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:07.658934 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
May 17 00:22:07.665870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:07.819993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:07.824385 (kubelet)[1658]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:07.885778 kubelet[1658]: E0517 00:22:07.885647    1658 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:07.888706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:07.888869 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:08.734141 update_engine[1475]: I20250517 00:22:08.733956  1475 update_attempter.cc:509] Updating boot flags...
May 17 00:22:08.791710 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1674)
May 17 00:22:08.867689 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1673)
May 17 00:22:08.912220 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1673)
May 17 00:22:17.908678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
May 17 00:22:17.920053 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:18.069637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:18.083009 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:18.148868 kubelet[1694]: E0517 00:22:18.148780    1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:18.152527 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:18.152792 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:28.158391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
May 17 00:22:28.166807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:28.286856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:28.289227 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:28.336404 kubelet[1708]: E0517 00:22:28.336327    1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:28.339337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:28.339493 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:38.409039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
May 17 00:22:38.415954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:38.563082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:38.566391 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:38.596903 kubelet[1723]: E0517 00:22:38.596846    1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:38.599862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:38.599976 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:48.658872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
May 17 00:22:48.664953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:48.808705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:48.812008 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:48.845437 kubelet[1739]: E0517 00:22:48.845363 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:48.848528 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:48.848696 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:22:58.908993 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
May 17 00:22:58.919966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:22:59.065619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:22:59.069230 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:22:59.101493 kubelet[1754]: E0517 00:22:59.101431 1754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:22:59.103472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:22:59.103646 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:23:02.437263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
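Every kubelet crash above is the same failure: run.go exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so this loop is expected on a node that has not yet joined a cluster. A minimal sketch of the failure mode, replayed against a scratch path rather than the real /var/lib/kubelet:

```python
import os
import tempfile

# Stand-in for /var/lib/kubelet/config.yaml (scratch dir, not the real path).
scratch = os.path.join(tempfile.mkdtemp(), "config.yaml")

def kubelet_would_start(config_path: str) -> bool:
    # kubelet's run.go fails with "open ...: no such file or directory" here.
    return os.path.isfile(config_path)

print(kubelet_would_start(scratch))  # config absent: kubelet exits 1, systemd reschedules
open(scratch, "w").close()           # kubeadm init/join is what writes the real file
print(kubelet_would_start(scratch))  # the next scheduled restart would then succeed
```

Once the file exists, the systemd restart counter stops incrementing because the unit stays up.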
May 17 00:23:02.443016 systemd[1]: Started sshd@0-37.27.213.195:22-139.178.89.65:58802.service - OpenSSH per-connection server daemon (139.178.89.65:58802).
May 17 00:23:03.424021 sshd[1762]: Accepted publickey for core from 139.178.89.65 port 58802 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:03.427164 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:03.441679 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 17 00:23:03.447974 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 17 00:23:03.452354 systemd-logind[1474]: New session 1 of user core.
May 17 00:23:03.470270 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 17 00:23:03.479090 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 17 00:23:03.483550 (systemd)[1766]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:23:03.637390 systemd[1766]: Queued start job for default target default.target.
May 17 00:23:03.641544 systemd[1766]: Created slice app.slice - User Application Slice.
May 17 00:23:03.641663 systemd[1766]: Reached target paths.target - Paths.
May 17 00:23:03.641676 systemd[1766]: Reached target timers.target - Timers.
May 17 00:23:03.642863 systemd[1766]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 17 00:23:03.664171 systemd[1766]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 17 00:23:03.664343 systemd[1766]: Reached target sockets.target - Sockets.
May 17 00:23:03.664369 systemd[1766]: Reached target basic.target - Basic System.
May 17 00:23:03.664427 systemd[1766]: Reached target default.target - Main User Target.
May 17 00:23:03.664468 systemd[1766]: Startup finished in 171ms.
May 17 00:23:03.664493 systemd[1]: Started user@500.service - User Manager for UID 500.
May 17 00:23:03.674833 systemd[1]: Started session-1.scope - Session 1 of User core.
May 17 00:23:04.365145 systemd[1]: Started sshd@1-37.27.213.195:22-139.178.89.65:58812.service - OpenSSH per-connection server daemon (139.178.89.65:58812).
May 17 00:23:05.336268 sshd[1777]: Accepted publickey for core from 139.178.89.65 port 58812 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:05.338041 sshd[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:05.344360 systemd-logind[1474]: New session 2 of user core.
May 17 00:23:05.353901 systemd[1]: Started session-2.scope - Session 2 of User core.
May 17 00:23:06.009569 sshd[1777]: pam_unix(sshd:session): session closed for user core
May 17 00:23:06.012730 systemd[1]: sshd@1-37.27.213.195:22-139.178.89.65:58812.service: Deactivated successfully.
May 17 00:23:06.014428 systemd[1]: session-2.scope: Deactivated successfully.
May 17 00:23:06.015706 systemd-logind[1474]: Session 2 logged out. Waiting for processes to exit.
May 17 00:23:06.017083 systemd-logind[1474]: Removed session 2.
May 17 00:23:06.184381 systemd[1]: Started sshd@2-37.27.213.195:22-139.178.89.65:58816.service - OpenSSH per-connection server daemon (139.178.89.65:58816).
May 17 00:23:07.172932 sshd[1784]: Accepted publickey for core from 139.178.89.65 port 58816 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:07.174642 sshd[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:07.179849 systemd-logind[1474]: New session 3 of user core.
May 17 00:23:07.186890 systemd[1]: Started session-3.scope - Session 3 of User core.
May 17 00:23:07.846215 sshd[1784]: pam_unix(sshd:session): session closed for user core
May 17 00:23:07.850819 systemd[1]: sshd@2-37.27.213.195:22-139.178.89.65:58816.service: Deactivated successfully.
May 17 00:23:07.854232 systemd[1]: session-3.scope: Deactivated successfully.
May 17 00:23:07.856432 systemd-logind[1474]: Session 3 logged out. Waiting for processes to exit.
May 17 00:23:07.858387 systemd-logind[1474]: Removed session 3.
May 17 00:23:08.022759 systemd[1]: Started sshd@3-37.27.213.195:22-139.178.89.65:44556.service - OpenSSH per-connection server daemon (139.178.89.65:44556).
May 17 00:23:09.001147 sshd[1791]: Accepted publickey for core from 139.178.89.65 port 44556 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:09.003663 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:09.012678 systemd-logind[1474]: New session 4 of user core.
May 17 00:23:09.019953 systemd[1]: Started session-4.scope - Session 4 of User core.
May 17 00:23:09.158749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
May 17 00:23:09.165374 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:09.328335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:09.332240 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:23:09.383062 kubelet[1802]: E0517 00:23:09.382931 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:23:09.385766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:23:09.386038 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:23:09.682430 sshd[1791]: pam_unix(sshd:session): session closed for user core
May 17 00:23:09.686177 systemd[1]: sshd@3-37.27.213.195:22-139.178.89.65:44556.service: Deactivated successfully.
May 17 00:23:09.688429 systemd[1]: session-4.scope: Deactivated successfully.
May 17 00:23:09.689378 systemd-logind[1474]: Session 4 logged out. Waiting for processes to exit.
May 17 00:23:09.690674 systemd-logind[1474]: Removed session 4.
May 17 00:23:09.854009 systemd[1]: Started sshd@4-37.27.213.195:22-139.178.89.65:44562.service - OpenSSH per-connection server daemon (139.178.89.65:44562).
May 17 00:23:10.824459 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 44562 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:10.826266 sshd[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:10.833413 systemd-logind[1474]: New session 5 of user core.
May 17 00:23:10.842973 systemd[1]: Started session-5.scope - Session 5 of User core.
May 17 00:23:11.352935 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 17 00:23:11.353389 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:23:11.371107 sudo[1816]: pam_unix(sudo:session): session closed for user root
May 17 00:23:11.529303 sshd[1813]: pam_unix(sshd:session): session closed for user core
May 17 00:23:11.533871 systemd[1]: sshd@4-37.27.213.195:22-139.178.89.65:44562.service: Deactivated successfully.
May 17 00:23:11.536332 systemd[1]: session-5.scope: Deactivated successfully.
May 17 00:23:11.538727 systemd-logind[1474]: Session 5 logged out. Waiting for processes to exit.
May 17 00:23:11.540841 systemd-logind[1474]: Removed session 5.
May 17 00:23:11.702406 systemd[1]: Started sshd@5-37.27.213.195:22-139.178.89.65:44578.service - OpenSSH per-connection server daemon (139.178.89.65:44578).
May 17 00:23:12.696515 sshd[1821]: Accepted publickey for core from 139.178.89.65 port 44578 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:12.698395 sshd[1821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:12.705115 systemd-logind[1474]: New session 6 of user core.
May 17 00:23:12.711832 systemd[1]: Started session-6.scope - Session 6 of User core.
May 17 00:23:13.218245 sudo[1825]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 17 00:23:13.218726 sudo[1825]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:23:13.224343 sudo[1825]: pam_unix(sudo:session): session closed for user root
May 17 00:23:13.232563 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 17 00:23:13.233050 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:23:13.252172 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 17 00:23:13.256099 auditctl[1828]: No rules
May 17 00:23:13.256983 systemd[1]: audit-rules.service: Deactivated successfully.
May 17 00:23:13.257251 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 17 00:23:13.263659 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:23:13.300012 augenrules[1846]: No rules
May 17 00:23:13.301741 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:23:13.303693 sudo[1824]: pam_unix(sudo:session): session closed for user root
May 17 00:23:13.462500 sshd[1821]: pam_unix(sshd:session): session closed for user core
May 17 00:23:13.466352 systemd[1]: sshd@5-37.27.213.195:22-139.178.89.65:44578.service: Deactivated successfully.
May 17 00:23:13.468486 systemd[1]: session-6.scope: Deactivated successfully.
May 17 00:23:13.470613 systemd-logind[1474]: Session 6 logged out. Waiting for processes to exit.
May 17 00:23:13.472204 systemd-logind[1474]: Removed session 6.
May 17 00:23:13.638187 systemd[1]: Started sshd@6-37.27.213.195:22-139.178.89.65:44586.service - OpenSSH per-connection server daemon (139.178.89.65:44586).
May 17 00:23:14.624910 sshd[1854]: Accepted publickey for core from 139.178.89.65 port 44586 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:23:14.626672 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:23:14.634130 systemd-logind[1474]: New session 7 of user core.
May 17 00:23:14.643938 systemd[1]: Started session-7.scope - Session 7 of User core.
May 17 00:23:15.140542 sudo[1857]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 17 00:23:15.140874 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 17 00:23:15.549922 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 17 00:23:15.550031 (dockerd)[1873]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 17 00:23:15.963047 dockerd[1873]: time="2025-05-17T00:23:15.962973605Z" level=info msg="Starting up"
May 17 00:23:16.096873 systemd[1]: var-lib-docker-metacopy\x2dcheck447989280-merged.mount: Deactivated successfully.
May 17 00:23:16.127976 dockerd[1873]: time="2025-05-17T00:23:16.127782010Z" level=info msg="Loading containers: start."
May 17 00:23:16.272664 kernel: Initializing XFRM netlink socket
May 17 00:23:16.390019 systemd-networkd[1395]: docker0: Link UP
May 17 00:23:16.410760 dockerd[1873]: time="2025-05-17T00:23:16.410694653Z" level=info msg="Loading containers: done."
May 17 00:23:16.433324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1358420833-merged.mount: Deactivated successfully.
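The entries above all share the classic syslog-style layout: a timestamp, a unit name with its PID in brackets, then the message. A minimal parser sketch for that layout (the field names and the sample line layout are taken from this log; the regex itself is an assumption about which characters unit names use):

```python
import re

# Parse "MON DD HH:MM:SS.micro unit[pid]: message" journal lines like those above.
LINE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) "
    r"(?P<unit>[\w@.:-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)$"
)

sample = ("May 17 00:22:18.152527 systemd[1]: kubelet.service: "
          "Main process exited, code=exited, status=1/FAILURE")
m = LINE.match(sample)
print(m["unit"], m["pid"], m["msg"])
```

Grouping parsed lines by the `unit`/`pid` pair is enough to separate, say, the repeated kubelet failures from the interleaved sshd session traffic.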
May 17 00:23:16.435776 dockerd[1873]: time="2025-05-17T00:23:16.435665864Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 17 00:23:16.436019 dockerd[1873]: time="2025-05-17T00:23:16.435884594Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 17 00:23:16.436066 dockerd[1873]: time="2025-05-17T00:23:16.436045716Z" level=info msg="Daemon has completed initialization"
May 17 00:23:16.483758 dockerd[1873]: time="2025-05-17T00:23:16.483651174Z" level=info msg="API listen on /run/docker.sock"
May 17 00:23:16.484136 systemd[1]: Started docker.service - Docker Application Container Engine.
May 17 00:23:17.836124 containerd[1490]: time="2025-05-17T00:23:17.836037879Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 17 00:23:18.476623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171037645.mount: Deactivated successfully.
May 17 00:23:19.408454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
May 17 00:23:19.414901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:19.541748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:19.547962 (kubelet)[2070]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:23:19.561170 containerd[1490]: time="2025-05-17T00:23:19.561095383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:19.562976 containerd[1490]: time="2025-05-17T00:23:19.562926679Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=28797905"
May 17 00:23:19.565398 containerd[1490]: time="2025-05-17T00:23:19.564201009Z" level=info msg="ImageCreate event name:\"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:19.567688 containerd[1490]: time="2025-05-17T00:23:19.567654778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:19.569538 containerd[1490]: time="2025-05-17T00:23:19.569505279Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"28794611\" in 1.733419669s"
May 17 00:23:19.569643 containerd[1490]: time="2025-05-17T00:23:19.569630454Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:495c5ce47cf7c8b58655ef50d0f0a9b43c5ae18492059dc9af4c9aacae82a5a4\""
May 17 00:23:19.570257 containerd[1490]: time="2025-05-17T00:23:19.570237232Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 17 00:23:19.591246 kubelet[2070]: E0517 00:23:19.591199 2070 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:23:19.593670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:23:19.593907 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:23:20.768334 containerd[1490]: time="2025-05-17T00:23:20.768267592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:20.769628 containerd[1490]: time="2025-05-17T00:23:20.769572539Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=24782545"
May 17 00:23:20.770801 containerd[1490]: time="2025-05-17T00:23:20.770744798Z" level=info msg="ImageCreate event name:\"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:20.773913 containerd[1490]: time="2025-05-17T00:23:20.773869199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:20.774918 containerd[1490]: time="2025-05-17T00:23:20.774879224Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"26384363\" in 1.204526986s"
May 17 00:23:20.774970 containerd[1490]: time="2025-05-17T00:23:20.774921543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:85dcaf69f000132c34fa34452e0fd8444bdf360b593fe06b1103680f6ecc7e00\""
May 17 00:23:20.775523 containerd[1490]: time="2025-05-17T00:23:20.775393789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 17 00:23:21.838190 containerd[1490]: time="2025-05-17T00:23:21.838117307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:21.839428 containerd[1490]: time="2025-05-17T00:23:21.839378632Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=19176085"
May 17 00:23:21.842600 containerd[1490]: time="2025-05-17T00:23:21.841700898Z" level=info msg="ImageCreate event name:\"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:21.845080 containerd[1490]: time="2025-05-17T00:23:21.845050101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:21.846070 containerd[1490]: time="2025-05-17T00:23:21.846047913Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"20777921\" in 1.070627554s"
May 17 00:23:21.846161 containerd[1490]: time="2025-05-17T00:23:21.846124035Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:2729fb488407e634105c62238a45a599db1692680526e20844060a7a8197b45a\""
May 17 00:23:21.846715 containerd[1490]: time="2025-05-17T00:23:21.846696529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 17 00:23:22.897556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913786800.mount: Deactivated successfully.
May 17 00:23:23.265098 containerd[1490]: time="2025-05-17T00:23:23.265013070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:23.266547 containerd[1490]: time="2025-05-17T00:23:23.266495992Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=30892900"
May 17 00:23:23.267978 containerd[1490]: time="2025-05-17T00:23:23.267925133Z" level=info msg="ImageCreate event name:\"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:23.270303 containerd[1490]: time="2025-05-17T00:23:23.270246918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:23.271006 containerd[1490]: time="2025-05-17T00:23:23.270891847Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"30891891\" in 1.424163929s"
May 17 00:23:23.271006 containerd[1490]: time="2025-05-17T00:23:23.270922495Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:f532b7356fac4d7c4e4f6763bb5a15a43e3bb740c9fb26c85b906a4d971f2363\""
May 17 00:23:23.271420 containerd[1490]: time="2025-05-17T00:23:23.271384752Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 17 00:23:23.798648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2644298585.mount: Deactivated successfully.
May 17 00:23:24.538952 containerd[1490]: time="2025-05-17T00:23:24.538874531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.539992 containerd[1490]: time="2025-05-17T00:23:24.539943086Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
May 17 00:23:24.540902 containerd[1490]: time="2025-05-17T00:23:24.540862089Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.543413 containerd[1490]: time="2025-05-17T00:23:24.543357490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:24.544545 containerd[1490]: time="2025-05-17T00:23:24.544428568Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.273018339s"
May 17 00:23:24.544545 containerd[1490]: time="2025-05-17T00:23:24.544458244Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
May 17 00:23:24.545182 containerd[1490]: time="2025-05-17T00:23:24.545049643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 17 00:23:25.023715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount129551427.mount: Deactivated successfully.
May 17 00:23:25.030770 containerd[1490]: time="2025-05-17T00:23:25.030677058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:25.032180 containerd[1490]: time="2025-05-17T00:23:25.032086592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
May 17 00:23:25.033137 containerd[1490]: time="2025-05-17T00:23:25.033096176Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:25.037988 containerd[1490]: time="2025-05-17T00:23:25.037886201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:25.039198 containerd[1490]: time="2025-05-17T00:23:25.039017663Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 493.941018ms"
May 17 00:23:25.039198 containerd[1490]: time="2025-05-17T00:23:25.039066724Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
May 17 00:23:25.040616 containerd[1490]: time="2025-05-17T00:23:25.040542032Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 17 00:23:25.570766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280373082.mount: Deactivated successfully.
May 17 00:23:27.351726 containerd[1490]: time="2025-05-17T00:23:27.351647141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:27.353417 containerd[1490]: time="2025-05-17T00:23:27.353331750Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551430"
May 17 00:23:27.354185 containerd[1490]: time="2025-05-17T00:23:27.354109319Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:27.358438 containerd[1490]: time="2025-05-17T00:23:27.358406428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:27.360392 containerd[1490]: time="2025-05-17T00:23:27.360193991Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.319579141s"
May 17 00:23:27.360392 containerd[1490]: time="2025-05-17T00:23:27.360250477Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
May 17 00:23:29.658663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
May 17 00:23:29.668823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:29.875722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
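Each containerd "Pulled image" entry above reports both the image size and the elapsed pull time, so effective registry throughput can be derived from the log alone. For example, for the etcd pull (57,680,541 bytes in 2.319579141 s, values copied from the entry above):

```python
# Effective throughput of the etcd:3.5.16-0 pull logged above.
size_bytes = 57_680_541       # size "57680541" from the Pulled image entry
elapsed_s = 2.319579141       # "in 2.319579141s" from the same entry
mib_per_s = size_bytes / elapsed_s / (1024 * 1024)
print(f"{mib_per_s:.1f} MiB/s")
```

The same arithmetic on the other pulls gives a quick sanity check on whether a slow node bootstrap is network-bound.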
May 17 00:23:29.876397 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 17 00:23:29.914087 kubelet[2232]: E0517 00:23:29.913912 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:23:29.916198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:23:29.916317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:23:31.232783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:31.242954 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:31.275107 systemd[1]: Reloading requested from client PID 2246 ('systemctl') (unit session-7.scope)...
May 17 00:23:31.275119 systemd[1]: Reloading...
May 17 00:23:31.368638 zram_generator::config[2289]: No configuration found.
May 17 00:23:31.460429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:23:31.537181 systemd[1]: Reloading finished in 261 ms.
May 17 00:23:31.578542 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 17 00:23:31.578747 systemd[1]: kubelet.service: Failed with result 'signal'.
May 17 00:23:31.579009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:31.581544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:31.683008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 17 00:23:31.692946 (kubelet)[2341]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:23:31.738803 kubelet[2341]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:23:31.738803 kubelet[2341]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:23:31.738803 kubelet[2341]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:23:31.739243 kubelet[2341]: I0517 00:23:31.738872 2341 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:23:32.186629 kubelet[2341]: I0517 00:23:32.186253 2341 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:23:32.186629 kubelet[2341]: I0517 00:23:32.186278 2341 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:23:32.186629 kubelet[2341]: I0517 00:23:32.186510 2341 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:23:32.230549 kubelet[2341]: E0517 00:23:32.230462 2341 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://37.27.213.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:32.232496 kubelet[2341]: I0517 00:23:32.232335 2341 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:23:32.255813 kubelet[2341]: E0517 00:23:32.255742 2341 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:23:32.255813 kubelet[2341]: I0517 00:23:32.255795 2341 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:23:32.263190 kubelet[2341]: I0517 00:23:32.263123 2341 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:23:32.265838 kubelet[2341]: I0517 00:23:32.265762 2341 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:23:32.266106 kubelet[2341]: I0517 00:23:32.265806 2341 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-decaff31fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:23:32.268458 kubelet[2341]: I0517 00:23:32.268406 2341 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:23:32.268458 kubelet[2341]: I0517 00:23:32.268439 2341 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:23:32.270147 kubelet[2341]: I0517 00:23:32.270098 2341 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:32.276558 kubelet[2341]: I0517 00:23:32.276134 2341 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:23:32.276558 kubelet[2341]: I0517 00:23:32.276181 2341 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:23:32.276558 kubelet[2341]: I0517 00:23:32.276212 2341 kubelet.go:352] "Adding apiserver pod source"
May 17 00:23:32.276558 kubelet[2341]: I0517 00:23:32.276229 2341 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:23:32.286028 kubelet[2341]: W0517 00:23:32.285290 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.213.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-decaff31fa&limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:32.286028 kubelet[2341]: E0517 00:23:32.285400 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.213.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-decaff31fa&limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:32.286705 kubelet[2341]: W0517 00:23:32.286653 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.213.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:32.286848 kubelet[2341]: E0517 00:23:32.286817 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.213.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:32.288897 kubelet[2341]: I0517 00:23:32.288869 2341 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:23:32.294672 kubelet[2341]: I0517 00:23:32.294647 2341 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:23:32.296312 kubelet[2341]: W0517 00:23:32.295804 2341 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 17 00:23:32.298960 kubelet[2341]: I0517 00:23:32.298917 2341 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:23:32.298960 kubelet[2341]: I0517 00:23:32.298963 2341 server.go:1287] "Started kubelet"
May 17 00:23:32.299329 kubelet[2341]: I0517 00:23:32.299281 2341 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:23:32.302684 kubelet[2341]: I0517 00:23:32.301716 2341 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:23:32.306341 kubelet[2341]: I0517 00:23:32.305914 2341 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:23:32.306341 kubelet[2341]: I0517 00:23:32.306273 2341 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:23:32.312981 kubelet[2341]: I0517 00:23:32.312088 2341 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:23:32.315907 kubelet[2341]: E0517 00:23:32.308280 2341 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://37.27.213.195:6443/api/v1/namespaces/default/events\": dial tcp 37.27.213.195:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-3-n-decaff31fa.184028b4c5e29230 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-3-n-decaff31fa,UID:ci-4081-3-3-n-decaff31fa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-3-n-decaff31fa,},FirstTimestamp:2025-05-17 00:23:32.29893688 +0000 UTC m=+0.603342431,LastTimestamp:2025-05-17 00:23:32.29893688 +0000 UTC m=+0.603342431,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-3-n-decaff31fa,}"
May 17 00:23:32.318640 kubelet[2341]: I0517 00:23:32.317785 2341 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:23:32.321960 kubelet[2341]: I0517 00:23:32.321869 2341 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:23:32.324425 kubelet[2341]: E0517 00:23:32.324396 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found"
May 17 00:23:32.325103 kubelet[2341]: I0517 00:23:32.324716 2341 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:23:32.325103 kubelet[2341]: I0517 00:23:32.324767 2341 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:23:32.325103 kubelet[2341]: E0517 00:23:32.324981 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.213.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-decaff31fa?timeout=10s\": dial tcp 37.27.213.195:6443: connect: connection refused" interval="200ms"
May 17 00:23:32.327444 kubelet[2341]: W0517 00:23:32.325401 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.213.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:32.327444 kubelet[2341]: E0517 00:23:32.325454 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.213.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:32.330938 kubelet[2341]: I0517 00:23:32.330914 2341 factory.go:221] Registration of the systemd container factory successfully
May 17 00:23:32.331014 kubelet[2341]: I0517 00:23:32.330992 2341 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:23:32.331452 kubelet[2341]: E0517 00:23:32.331437 2341 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:23:32.332097 kubelet[2341]: I0517 00:23:32.332083 2341 factory.go:221] Registration of the containerd container factory successfully
May 17 00:23:32.346527 kubelet[2341]: I0517 00:23:32.346489 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:23:32.347571 kubelet[2341]: I0517 00:23:32.347425 2341 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:23:32.347571 kubelet[2341]: I0517 00:23:32.347444 2341 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:23:32.347571 kubelet[2341]: I0517 00:23:32.347470 2341 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:23:32.347571 kubelet[2341]: I0517 00:23:32.347476 2341 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:23:32.347571 kubelet[2341]: E0517 00:23:32.347510 2341 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:23:32.350341 kubelet[2341]: W0517 00:23:32.350317 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.213.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:32.350391 kubelet[2341]: E0517 00:23:32.350344 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.213.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:32.353666 kubelet[2341]: I0517 00:23:32.353248 2341 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:23:32.353666 kubelet[2341]: I0517 00:23:32.353258 2341 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:23:32.353666 kubelet[2341]: I0517 00:23:32.353269 2341 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:32.355256 kubelet[2341]: I0517 00:23:32.355241 2341 policy_none.go:49] "None policy: Start"
May 17 00:23:32.355256 kubelet[2341]: I0517 00:23:32.355255 2341 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:23:32.355315 kubelet[2341]: I0517 00:23:32.355263 2341 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:23:32.360083 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 17 00:23:32.380302 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 17 00:23:32.383060 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 17 00:23:32.390385 kubelet[2341]: I0517 00:23:32.390356 2341 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:23:32.390543 kubelet[2341]: I0517 00:23:32.390521 2341 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:23:32.390573 kubelet[2341]: I0517 00:23:32.390537 2341 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:23:32.390989 kubelet[2341]: I0517 00:23:32.390773 2341 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:23:32.392085 kubelet[2341]: E0517 00:23:32.392071 2341 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:23:32.392135 kubelet[2341]: E0517 00:23:32.392108 2341 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-3-n-decaff31fa\" not found"
May 17 00:23:32.463471 systemd[1]: Created slice kubepods-burstable-pod2e4324800664cc5ee36f8cc8229861c8.slice - libcontainer container kubepods-burstable-pod2e4324800664cc5ee36f8cc8229861c8.slice.
May 17 00:23:32.474091 kubelet[2341]: E0517 00:23:32.473749 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.482066 systemd[1]: Created slice kubepods-burstable-poddfa9ff6ace0fe83cf24a0c6bedb7b2a5.slice - libcontainer container kubepods-burstable-poddfa9ff6ace0fe83cf24a0c6bedb7b2a5.slice.
May 17 00:23:32.487066 kubelet[2341]: E0517 00:23:32.486729 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.490519 systemd[1]: Created slice kubepods-burstable-pod6a36a1c06acc2504fe1c714cf46ad472.slice - libcontainer container kubepods-burstable-pod6a36a1c06acc2504fe1c714cf46ad472.slice.
May 17 00:23:32.493577 kubelet[2341]: E0517 00:23:32.493106 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.493577 kubelet[2341]: I0517 00:23:32.493308 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.494079 kubelet[2341]: E0517 00:23:32.494032 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.213.195:6443/api/v1/nodes\": dial tcp 37.27.213.195:6443: connect: connection refused" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.525837 kubelet[2341]: E0517 00:23:32.525749 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.213.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-decaff31fa?timeout=10s\": dial tcp 37.27.213.195:6443: connect: connection refused" interval="400ms"
May 17 00:23:32.526790 kubelet[2341]: I0517 00:23:32.526745 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527262 kubelet[2341]: I0517 00:23:32.526793 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527262 kubelet[2341]: I0517 00:23:32.526825 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527262 kubelet[2341]: I0517 00:23:32.526900 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfa9ff6ace0fe83cf24a0c6bedb7b2a5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-decaff31fa\" (UID: \"dfa9ff6ace0fe83cf24a0c6bedb7b2a5\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527262 kubelet[2341]: I0517 00:23:32.526937 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527262 kubelet[2341]: I0517 00:23:32.526965 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527481 kubelet[2341]: I0517 00:23:32.526998 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527481 kubelet[2341]: I0517 00:23:32.527070 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.527481 kubelet[2341]: I0517 00:23:32.527103 2341 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.696752 kubelet[2341]: I0517 00:23:32.696666 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.697163 kubelet[2341]: E0517 00:23:32.697105 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.213.195:6443/api/v1/nodes\": dial tcp 37.27.213.195:6443: connect: connection refused" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:32.776290 containerd[1490]: time="2025-05-17T00:23:32.776145249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-decaff31fa,Uid:2e4324800664cc5ee36f8cc8229861c8,Namespace:kube-system,Attempt:0,}"
May 17 00:23:32.788710 containerd[1490]: time="2025-05-17T00:23:32.788663102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-decaff31fa,Uid:dfa9ff6ace0fe83cf24a0c6bedb7b2a5,Namespace:kube-system,Attempt:0,}"
May 17 00:23:32.794220 containerd[1490]: time="2025-05-17T00:23:32.794181950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-decaff31fa,Uid:6a36a1c06acc2504fe1c714cf46ad472,Namespace:kube-system,Attempt:0,}"
May 17 00:23:32.927090 kubelet[2341]: E0517 00:23:32.926992 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.213.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-decaff31fa?timeout=10s\": dial tcp 37.27.213.195:6443: connect: connection refused" interval="800ms"
May 17 00:23:33.100260 kubelet[2341]: I0517 00:23:33.100121 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:33.100670 kubelet[2341]: E0517 00:23:33.100512 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.213.195:6443/api/v1/nodes\": dial tcp 37.27.213.195:6443: connect: connection refused" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:33.247391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334469946.mount: Deactivated successfully.
May 17 00:23:33.257031 containerd[1490]: time="2025-05-17T00:23:33.256943126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:33.258532 containerd[1490]: time="2025-05-17T00:23:33.258452926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:23:33.258532 containerd[1490]: time="2025-05-17T00:23:33.258529931Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:33.259468 containerd[1490]: time="2025-05-17T00:23:33.259425871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:33.260450 containerd[1490]: time="2025-05-17T00:23:33.260389969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078"
May 17 00:23:33.261999 containerd[1490]: time="2025-05-17T00:23:33.261831622Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:33.261999 containerd[1490]: time="2025-05-17T00:23:33.261943062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 17 00:23:33.264815 containerd[1490]: time="2025-05-17T00:23:33.264726019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 17 00:23:33.267843 containerd[1490]: time="2025-05-17T00:23:33.267699205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 478.947216ms"
May 17 00:23:33.270626 containerd[1490]: time="2025-05-17T00:23:33.269829820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 475.598077ms"
May 17 00:23:33.272530 containerd[1490]: time="2025-05-17T00:23:33.272384851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 496.135967ms"
May 17 00:23:33.450082 kubelet[2341]: W0517 00:23:33.450016 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://37.27.213.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:33.450609 kubelet[2341]: E0517 00:23:33.450555 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://37.27.213.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:33.487122 containerd[1490]: time="2025-05-17T00:23:33.486838689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:33.487122 containerd[1490]: time="2025-05-17T00:23:33.486915814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:33.487122 containerd[1490]: time="2025-05-17T00:23:33.486942784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.487122 containerd[1490]: time="2025-05-17T00:23:33.487036160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.492787 containerd[1490]: time="2025-05-17T00:23:33.491661092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:33.492787 containerd[1490]: time="2025-05-17T00:23:33.491751942Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:33.492787 containerd[1490]: time="2025-05-17T00:23:33.491775226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.492787 containerd[1490]: time="2025-05-17T00:23:33.491884170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.499000 containerd[1490]: time="2025-05-17T00:23:33.498697857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:33.499000 containerd[1490]: time="2025-05-17T00:23:33.498770223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:33.499000 containerd[1490]: time="2025-05-17T00:23:33.498792565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.499000 containerd[1490]: time="2025-05-17T00:23:33.498895987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:33.524114 systemd[1]: Started cri-containerd-1a7f48ee223f53d8a02b041d6366ee1ad06d8df8184e6e20a56ff182b29a326f.scope - libcontainer container 1a7f48ee223f53d8a02b041d6366ee1ad06d8df8184e6e20a56ff182b29a326f.
May 17 00:23:33.527071 systemd[1]: Started cri-containerd-273945487ea39fcf245f4283d93958d55a63af5fbd1b369132d446a82019f110.scope - libcontainer container 273945487ea39fcf245f4283d93958d55a63af5fbd1b369132d446a82019f110.
May 17 00:23:33.531041 systemd[1]: Started cri-containerd-9c14c2421aa4c232b6e2de8d8bf90eed74b6e8baa6d7a92abf24e824e1161315.scope - libcontainer container 9c14c2421aa4c232b6e2de8d8bf90eed74b6e8baa6d7a92abf24e824e1161315.
May 17 00:23:33.575017 containerd[1490]: time="2025-05-17T00:23:33.574946001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-3-n-decaff31fa,Uid:2e4324800664cc5ee36f8cc8229861c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"273945487ea39fcf245f4283d93958d55a63af5fbd1b369132d446a82019f110\""
May 17 00:23:33.581034 containerd[1490]: time="2025-05-17T00:23:33.581007969Z" level=info msg="CreateContainer within sandbox \"273945487ea39fcf245f4283d93958d55a63af5fbd1b369132d446a82019f110\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 17 00:23:33.593615 kubelet[2341]: W0517 00:23:33.592120 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://37.27.213.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:33.593615 kubelet[2341]: E0517 00:23:33.592233 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://37.27.213.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:33.596849 containerd[1490]: time="2025-05-17T00:23:33.596813255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-3-n-decaff31fa,Uid:6a36a1c06acc2504fe1c714cf46ad472,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c14c2421aa4c232b6e2de8d8bf90eed74b6e8baa6d7a92abf24e824e1161315\""
May 17 00:23:33.600199 containerd[1490]: time="2025-05-17T00:23:33.600166153Z" level=info msg="CreateContainer within sandbox \"9c14c2421aa4c232b6e2de8d8bf90eed74b6e8baa6d7a92abf24e824e1161315\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 17 00:23:33.611802 containerd[1490]: time="2025-05-17T00:23:33.611383155Z" level=info msg="CreateContainer within sandbox \"273945487ea39fcf245f4283d93958d55a63af5fbd1b369132d446a82019f110\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2ea3068ba186973ff282358001b1400e6bfbe489eb18064126ae6c76a1ca5067\""
May 17 00:23:33.615054 containerd[1490]: time="2025-05-17T00:23:33.615025246Z" level=info msg="StartContainer for \"2ea3068ba186973ff282358001b1400e6bfbe489eb18064126ae6c76a1ca5067\""
May 17 00:23:33.620565 containerd[1490]: time="2025-05-17T00:23:33.620530709Z" level=info msg="CreateContainer within sandbox \"9c14c2421aa4c232b6e2de8d8bf90eed74b6e8baa6d7a92abf24e824e1161315\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"396cf73c324ee352bf8ace0bb9e127078708c5d4c95daf969fc65968cb2ed3cf\""
May 17 00:23:33.622028 containerd[1490]: time="2025-05-17T00:23:33.622005644Z" level=info msg="StartContainer for \"396cf73c324ee352bf8ace0bb9e127078708c5d4c95daf969fc65968cb2ed3cf\""
May 17 00:23:33.627780 containerd[1490]: time="2025-05-17T00:23:33.627743995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-3-n-decaff31fa,Uid:dfa9ff6ace0fe83cf24a0c6bedb7b2a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a7f48ee223f53d8a02b041d6366ee1ad06d8df8184e6e20a56ff182b29a326f\""
May 17 00:23:33.630418 containerd[1490]: time="2025-05-17T00:23:33.630391819Z" level=info msg="CreateContainer within sandbox \"1a7f48ee223f53d8a02b041d6366ee1ad06d8df8184e6e20a56ff182b29a326f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 17 00:23:33.646867 containerd[1490]: time="2025-05-17T00:23:33.646801370Z" level=info msg="CreateContainer within sandbox \"1a7f48ee223f53d8a02b041d6366ee1ad06d8df8184e6e20a56ff182b29a326f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1abdcb0789e5cc15466f91710638e9a7c26752041aba087579877713064c08ad\""
May 17 00:23:33.647747 containerd[1490]: time="2025-05-17T00:23:33.647729109Z" level=info msg="StartContainer for \"1abdcb0789e5cc15466f91710638e9a7c26752041aba087579877713064c08ad\""
May 17 00:23:33.655780 systemd[1]: Started cri-containerd-396cf73c324ee352bf8ace0bb9e127078708c5d4c95daf969fc65968cb2ed3cf.scope - libcontainer container 396cf73c324ee352bf8ace0bb9e127078708c5d4c95daf969fc65968cb2ed3cf.
May 17 00:23:33.669854 systemd[1]: Started cri-containerd-2ea3068ba186973ff282358001b1400e6bfbe489eb18064126ae6c76a1ca5067.scope - libcontainer container 2ea3068ba186973ff282358001b1400e6bfbe489eb18064126ae6c76a1ca5067.
May 17 00:23:33.680745 systemd[1]: Started cri-containerd-1abdcb0789e5cc15466f91710638e9a7c26752041aba087579877713064c08ad.scope - libcontainer container 1abdcb0789e5cc15466f91710638e9a7c26752041aba087579877713064c08ad.
May 17 00:23:33.712944 kubelet[2341]: W0517 00:23:33.712693 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://37.27.213.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-decaff31fa&limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused
May 17 00:23:33.712944 kubelet[2341]: E0517 00:23:33.712782 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://37.27.213.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-3-n-decaff31fa&limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError"
May 17 00:23:33.717446 containerd[1490]: time="2025-05-17T00:23:33.717349957Z" level=info msg="StartContainer for \"396cf73c324ee352bf8ace0bb9e127078708c5d4c95daf969fc65968cb2ed3cf\" returns successfully"
May 17 00:23:33.728225 kubelet[2341]: E0517 00:23:33.728180 2341 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://37.27.213.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-3-n-decaff31fa?timeout=10s\": dial tcp 
37.27.213.195:6443: connect: connection refused" interval="1.6s" May 17 00:23:33.742783 containerd[1490]: time="2025-05-17T00:23:33.742648655Z" level=info msg="StartContainer for \"2ea3068ba186973ff282358001b1400e6bfbe489eb18064126ae6c76a1ca5067\" returns successfully" May 17 00:23:33.749755 containerd[1490]: time="2025-05-17T00:23:33.749705448Z" level=info msg="StartContainer for \"1abdcb0789e5cc15466f91710638e9a7c26752041aba087579877713064c08ad\" returns successfully" May 17 00:23:33.793604 kubelet[2341]: W0517 00:23:33.791548 2341 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://37.27.213.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 37.27.213.195:6443: connect: connection refused May 17 00:23:33.793604 kubelet[2341]: E0517 00:23:33.792005 2341 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://37.27.213.195:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 37.27.213.195:6443: connect: connection refused" logger="UnhandledError" May 17 00:23:33.902815 kubelet[2341]: I0517 00:23:33.902778 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:33.903738 kubelet[2341]: E0517 00:23:33.903712 2341 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://37.27.213.195:6443/api/v1/nodes\": dial tcp 37.27.213.195:6443: connect: connection refused" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:34.362509 kubelet[2341]: E0517 00:23:34.362472 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:34.362885 kubelet[2341]: E0517 00:23:34.362636 2341 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:34.363978 kubelet[2341]: E0517 00:23:34.363954 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.377413 kubelet[2341]: E0517 00:23:35.376949 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.377413 kubelet[2341]: E0517 00:23:35.377317 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.506512 kubelet[2341]: I0517 00:23:35.506283 2341 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.526027 kubelet[2341]: E0517 00:23:35.525994 2341 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.646639 kubelet[2341]: I0517 00:23:35.646405 2341 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:35.647138 kubelet[2341]: E0517 00:23:35.646884 2341 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-3-n-decaff31fa\": node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:35.672853 kubelet[2341]: E0517 00:23:35.672791 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:35.773760 kubelet[2341]: E0517 00:23:35.773686 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:35.873914 kubelet[2341]: E0517 00:23:35.873828 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:35.974771 kubelet[2341]: E0517 00:23:35.974697 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.075289 kubelet[2341]: E0517 00:23:36.075215 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.176223 kubelet[2341]: E0517 00:23:36.176141 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.276845 kubelet[2341]: E0517 00:23:36.276652 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.371077 kubelet[2341]: E0517 00:23:36.370968 2341 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-3-n-decaff31fa\" not found" node="ci-4081-3-3-n-decaff31fa" May 17 00:23:36.377365 kubelet[2341]: E0517 00:23:36.377272 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.477729 kubelet[2341]: E0517 00:23:36.477661 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.579107 kubelet[2341]: E0517 00:23:36.578845 2341 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-3-n-decaff31fa\" not found" May 17 00:23:36.625836 kubelet[2341]: I0517 00:23:36.625410 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa" May 17 00:23:36.641545 
kubelet[2341]: I0517 00:23:36.641486 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa" May 17 00:23:36.651086 kubelet[2341]: I0517 00:23:36.650760 2341 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa" May 17 00:23:37.288493 kubelet[2341]: I0517 00:23:37.288414 2341 apiserver.go:52] "Watching apiserver" May 17 00:23:37.325980 kubelet[2341]: I0517 00:23:37.325860 2341 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 17 00:23:37.655986 systemd[1]: Reloading requested from client PID 2614 ('systemctl') (unit session-7.scope)... May 17 00:23:37.656003 systemd[1]: Reloading... May 17 00:23:37.729649 zram_generator::config[2650]: No configuration found. May 17 00:23:37.848862 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:37.938287 systemd[1]: Reloading finished in 281 ms. May 17 00:23:37.972502 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:37.989818 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:23:37.990000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:37.990037 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 129.4M memory peak, 0B memory swap peak. May 17 00:23:37.997070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:38.167497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 17 00:23:38.176059 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 17 00:23:38.230678 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:23:38.231010 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 17 00:23:38.231039 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 17 00:23:38.231178 kubelet[2705]: I0517 00:23:38.231132 2705 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 17 00:23:38.243144 kubelet[2705]: I0517 00:23:38.243099 2705 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 17 00:23:38.243144 kubelet[2705]: I0517 00:23:38.243137 2705 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 17 00:23:38.243586 kubelet[2705]: I0517 00:23:38.243553 2705 server.go:954] "Client rotation is on, will bootstrap in background"
May 17 00:23:38.245697 kubelet[2705]: I0517 00:23:38.245670 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 17 00:23:38.249398 kubelet[2705]: I0517 00:23:38.249263 2705 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 17 00:23:38.263604 kubelet[2705]: E0517 00:23:38.263550 2705 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 17 00:23:38.263604 kubelet[2705]: I0517 00:23:38.263607 2705 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 17 00:23:38.270982 kubelet[2705]: I0517 00:23:38.270946 2705 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 17 00:23:38.272373 kubelet[2705]: I0517 00:23:38.272330 2705 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 17 00:23:38.272625 kubelet[2705]: I0517 00:23:38.272369 2705 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-3-n-decaff31fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 17 00:23:38.273474 kubelet[2705]: I0517 00:23:38.273448 2705 topology_manager.go:138] "Creating topology manager with none policy"
May 17 00:23:38.273474 kubelet[2705]: I0517 00:23:38.273472 2705 container_manager_linux.go:304] "Creating device plugin manager"
May 17 00:23:38.273545 kubelet[2705]: I0517 00:23:38.273520 2705 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:38.274836 kubelet[2705]: I0517 00:23:38.273752 2705 kubelet.go:446] "Attempting to sync node with API server"
May 17 00:23:38.274836 kubelet[2705]: I0517 00:23:38.273791 2705 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 17 00:23:38.274836 kubelet[2705]: I0517 00:23:38.273811 2705 kubelet.go:352] "Adding apiserver pod source"
May 17 00:23:38.274836 kubelet[2705]: I0517 00:23:38.273826 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 17 00:23:38.275120 kubelet[2705]: I0517 00:23:38.275104 2705 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 17 00:23:38.275664 kubelet[2705]: I0517 00:23:38.275650 2705 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 17 00:23:38.276207 kubelet[2705]: I0517 00:23:38.276191 2705 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 17 00:23:38.276300 kubelet[2705]: I0517 00:23:38.276291 2705 server.go:1287] "Started kubelet"
May 17 00:23:38.283673 kubelet[2705]: I0517 00:23:38.282275 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 17 00:23:38.283673 kubelet[2705]: I0517 00:23:38.282560 2705 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 17 00:23:38.286482 kubelet[2705]: I0517 00:23:38.285792 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 17 00:23:38.295614 kubelet[2705]: I0517 00:23:38.295456 2705 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
May 17 00:23:38.300945 kubelet[2705]: I0517 00:23:38.300911 2705 server.go:479] "Adding debug handlers to kubelet server"
May 17 00:23:38.302082 kubelet[2705]: I0517 00:23:38.302062 2705 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 17 00:23:38.304429 kubelet[2705]: I0517 00:23:38.303695 2705 volume_manager.go:297] "Starting Kubelet Volume Manager"
May 17 00:23:38.304429 kubelet[2705]: I0517 00:23:38.304117 2705 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
May 17 00:23:38.304429 kubelet[2705]: I0517 00:23:38.304196 2705 reconciler.go:26] "Reconciler: start to sync state"
May 17 00:23:38.307007 kubelet[2705]: I0517 00:23:38.306979 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 17 00:23:38.307690 kubelet[2705]: I0517 00:23:38.307675 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 17 00:23:38.307737 kubelet[2705]: I0517 00:23:38.307701 2705 status_manager.go:227] "Starting to sync pod status with apiserver"
May 17 00:23:38.307737 kubelet[2705]: I0517 00:23:38.307717 2705 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
May 17 00:23:38.307737 kubelet[2705]: I0517 00:23:38.307723 2705 kubelet.go:2382] "Starting kubelet main sync loop"
May 17 00:23:38.307796 kubelet[2705]: E0517 00:23:38.307752 2705 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 17 00:23:38.308685 kubelet[2705]: I0517 00:23:38.308673 2705 factory.go:221] Registration of the systemd container factory successfully
May 17 00:23:38.308969 kubelet[2705]: I0517 00:23:38.308955 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 17 00:23:38.309847 kubelet[2705]: I0517 00:23:38.309838 2705 factory.go:221] Registration of the containerd container factory successfully
May 17 00:23:38.313858 kubelet[2705]: E0517 00:23:38.313844 2705 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 17 00:23:38.351377 kubelet[2705]: I0517 00:23:38.351348 2705 cpu_manager.go:221] "Starting CPU manager" policy="none"
May 17 00:23:38.351377 kubelet[2705]: I0517 00:23:38.351365 2705 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
May 17 00:23:38.351377 kubelet[2705]: I0517 00:23:38.351380 2705 state_mem.go:36] "Initialized new in-memory state store"
May 17 00:23:38.351548 kubelet[2705]: I0517 00:23:38.351526 2705 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 17 00:23:38.351548 kubelet[2705]: I0517 00:23:38.351535 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 17 00:23:38.351600 kubelet[2705]: I0517 00:23:38.351552 2705 policy_none.go:49] "None policy: Start"
May 17 00:23:38.351600 kubelet[2705]: I0517 00:23:38.351562 2705 memory_manager.go:186] "Starting memorymanager" policy="None"
May 17 00:23:38.351600 kubelet[2705]: I0517 00:23:38.351570 2705 state_mem.go:35] "Initializing new in-memory state store"
May 17 00:23:38.351707 kubelet[2705]: I0517 00:23:38.351690 2705 state_mem.go:75] "Updated machine memory state"
May 17 00:23:38.354764 kubelet[2705]: I0517 00:23:38.354745 2705 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 17 00:23:38.354905 kubelet[2705]: I0517 00:23:38.354880 2705 eviction_manager.go:189] "Eviction manager: starting control loop"
May 17 00:23:38.354933 kubelet[2705]: I0517 00:23:38.354892 2705 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 17 00:23:38.355788 kubelet[2705]: I0517 00:23:38.355367 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 17 00:23:38.358101 kubelet[2705]: E0517 00:23:38.357425 2705 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
May 17 00:23:38.409854 kubelet[2705]: I0517 00:23:38.409374 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.411745 kubelet[2705]: I0517 00:23:38.411719 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.411857 kubelet[2705]: I0517 00:23:38.411666 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.418337 kubelet[2705]: E0517 00:23:38.418208 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.419152 kubelet[2705]: E0517 00:23:38.419112 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.419860 kubelet[2705]: E0517 00:23:38.419772 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-3-n-decaff31fa\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.460437 kubelet[2705]: I0517 00:23:38.460405 2705 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.469608 kubelet[2705]: I0517 00:23:38.469540 2705 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.469799 kubelet[2705]: I0517 00:23:38.469662 2705 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.605976 kubelet[2705]: I0517 00:23:38.605792 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.605976 kubelet[2705]: I0517 00:23:38.605852 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.605976 kubelet[2705]: I0517 00:23:38.605919 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.605976 kubelet[2705]: I0517 00:23:38.605949 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.606246 kubelet[2705]: I0517 00:23:38.605978 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfa9ff6ace0fe83cf24a0c6bedb7b2a5-kubeconfig\") pod \"kube-scheduler-ci-4081-3-3-n-decaff31fa\" (UID: \"dfa9ff6ace0fe83cf24a0c6bedb7b2a5\") " pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.606246 kubelet[2705]: I0517 00:23:38.606006 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.606246 kubelet[2705]: I0517 00:23:38.606034 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-ca-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.606246 kubelet[2705]: I0517 00:23:38.606060 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6a36a1c06acc2504fe1c714cf46ad472-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" (UID: \"6a36a1c06acc2504fe1c714cf46ad472\") " pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.606246 kubelet[2705]: I0517 00:23:38.606087 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e4324800664cc5ee36f8cc8229861c8-ca-certs\") pod \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" (UID: \"2e4324800664cc5ee36f8cc8229861c8\") " pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:38.677051 sudo[2739]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 17 00:23:38.677536 sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 17 00:23:39.229508 sudo[2739]: pam_unix(sudo:session): session closed for user root
May 17 00:23:39.280110 kubelet[2705]: I0517 00:23:39.280061 2705 apiserver.go:52] "Watching apiserver"
May 17 00:23:39.305220 kubelet[2705]: I0517 00:23:39.305155 2705 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
May 17 00:23:39.346994 kubelet[2705]: I0517 00:23:39.346962 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:39.348862 kubelet[2705]: I0517 00:23:39.348842 2705 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:39.375487 kubelet[2705]: E0517 00:23:39.375451 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-3-n-decaff31fa\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa"
May 17 00:23:39.377475 kubelet[2705]: E0517 00:23:39.377451 2705 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-3-n-decaff31fa\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa"
May 17 00:23:39.377994 kubelet[2705]: I0517 00:23:39.377945 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-3-n-decaff31fa" podStartSLOduration=3.377918956 podStartE2EDuration="3.377918956s" podCreationTimestamp="2025-05-17 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:39.37771388 +0000 UTC m=+1.196304895" watchObservedRunningTime="2025-05-17 00:23:39.377918956 +0000 UTC m=+1.196509969"
May 17 00:23:39.407170 kubelet[2705]: I0517 00:23:39.407113 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-3-n-decaff31fa" podStartSLOduration=3.407096999 podStartE2EDuration="3.407096999s" podCreationTimestamp="2025-05-17 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:39.393362755 +0000 UTC m=+1.211953779" watchObservedRunningTime="2025-05-17 00:23:39.407096999 +0000 UTC m=+1.225688004"
May 17 00:23:39.425454 kubelet[2705]: I0517 00:23:39.425408 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-3-n-decaff31fa" podStartSLOduration=3.425392848 podStartE2EDuration="3.425392848s" podCreationTimestamp="2025-05-17 00:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:39.407683139 +0000 UTC m=+1.226274153" watchObservedRunningTime="2025-05-17 00:23:39.425392848 +0000 UTC m=+1.243983863"
May 17 00:23:41.027641 sudo[1857]: pam_unix(sudo:session): session closed for user root
May 17 00:23:41.186041 sshd[1854]: pam_unix(sshd:session): session closed for user core
May 17 00:23:41.190362 systemd[1]: sshd@6-37.27.213.195:22-139.178.89.65:44586.service: Deactivated successfully.
May 17 00:23:41.193531 systemd[1]: session-7.scope: Deactivated successfully.
May 17 00:23:41.194104 systemd[1]: session-7.scope: Consumed 6.196s CPU time, 144.4M memory peak, 0B memory swap peak.
May 17 00:23:41.199232 systemd-logind[1474]: Session 7 logged out. Waiting for processes to exit.
May 17 00:23:41.202661 systemd-logind[1474]: Removed session 7.
May 17 00:23:43.913378 kubelet[2705]: I0517 00:23:43.913052 2705 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 17 00:23:43.914631 containerd[1490]: time="2025-05-17T00:23:43.914444673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 17 00:23:43.915154 kubelet[2705]: I0517 00:23:43.914842 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 17 00:23:44.644114 kubelet[2705]: I0517 00:23:44.643840 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db8122d7-503b-47c1-b58a-2e2188ea559b-xtables-lock\") pod \"kube-proxy-7kf49\" (UID: \"db8122d7-503b-47c1-b58a-2e2188ea559b\") " pod="kube-system/kube-proxy-7kf49"
May 17 00:23:44.644114 kubelet[2705]: I0517 00:23:44.643932 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db8122d7-503b-47c1-b58a-2e2188ea559b-lib-modules\") pod \"kube-proxy-7kf49\" (UID: \"db8122d7-503b-47c1-b58a-2e2188ea559b\") " pod="kube-system/kube-proxy-7kf49"
May 17 00:23:44.644114 kubelet[2705]: I0517 00:23:44.643963 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxwf\" (UniqueName: \"kubernetes.io/projected/db8122d7-503b-47c1-b58a-2e2188ea559b-kube-api-access-ftxwf\") pod \"kube-proxy-7kf49\" (UID: \"db8122d7-503b-47c1-b58a-2e2188ea559b\") " pod="kube-system/kube-proxy-7kf49"
May 17 00:23:44.644114 kubelet[2705]: I0517 00:23:44.644031 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db8122d7-503b-47c1-b58a-2e2188ea559b-kube-proxy\") pod \"kube-proxy-7kf49\" (UID: \"db8122d7-503b-47c1-b58a-2e2188ea559b\") " pod="kube-system/kube-proxy-7kf49"
May 17 00:23:44.656672 systemd[1]: Created slice kubepods-besteffort-poddb8122d7_503b_47c1_b58a_2e2188ea559b.slice - libcontainer container kubepods-besteffort-poddb8122d7_503b_47c1_b58a_2e2188ea559b.slice.
May 17 00:23:44.678944 systemd[1]: Created slice kubepods-burstable-podacc1aca0_af82_4917_a4bb_9afb519fff17.slice - libcontainer container kubepods-burstable-podacc1aca0_af82_4917_a4bb_9afb519fff17.slice.
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744495 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cni-path\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744570 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-lib-modules\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744645 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-xtables-lock\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744701 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acc1aca0-af82-4917-a4bb-9afb519fff17-clustermesh-secrets\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744743 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-config-path\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.746175 kubelet[2705]: I0517 00:23:44.744784 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-bpf-maps\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.744912 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-hostproc\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.744957 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25jnm\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-kube-api-access-25jnm\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.745027 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-cgroup\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.745074 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-hubble-tls\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.745123 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-net\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749023 kubelet[2705]: I0517 00:23:44.745240 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-etc-cni-netd\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749492 kubelet[2705]: I0517 00:23:44.745359 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-kernel\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.749492 kubelet[2705]: I0517 00:23:44.745473 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-run\") pod \"cilium-xqz9z\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") " pod="kube-system/cilium-xqz9z"
May 17 00:23:44.972870 containerd[1490]: time="2025-05-17T00:23:44.972818889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kf49,Uid:db8122d7-503b-47c1-b58a-2e2188ea559b,Namespace:kube-system,Attempt:0,}"
May 17 00:23:44.986073 containerd[1490]: time="2025-05-17T00:23:44.985529004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqz9z,Uid:acc1aca0-af82-4917-a4bb-9afb519fff17,Namespace:kube-system,Attempt:0,}"
May 17 00:23:45.013749 containerd[1490]: time="2025-05-17T00:23:45.012842934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:45.014749 containerd[1490]: time="2025-05-17T00:23:45.013758542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:45.014749 containerd[1490]: time="2025-05-17T00:23:45.013788137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.014749 containerd[1490]: time="2025-05-17T00:23:45.014110551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.050531 systemd[1]: Started cri-containerd-cc6d8d6c848954bdf337f1bde942d73852cf7ead68fa1a0ec1f7db1294efce79.scope - libcontainer container cc6d8d6c848954bdf337f1bde942d73852cf7ead68fa1a0ec1f7db1294efce79.
May 17 00:23:45.061934 containerd[1490]: time="2025-05-17T00:23:45.061376708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:45.061934 containerd[1490]: time="2025-05-17T00:23:45.061610316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:45.061934 containerd[1490]: time="2025-05-17T00:23:45.061780345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.066119 containerd[1490]: time="2025-05-17T00:23:45.062240898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.065499 systemd[1]: Created slice kubepods-besteffort-pod3b6cd7cf_7db6_4dc1_bfba_2e9d1126b65b.slice - libcontainer container kubepods-besteffort-pod3b6cd7cf_7db6_4dc1_bfba_2e9d1126b65b.slice.
May 17 00:23:45.068606 kubelet[2705]: I0517 00:23:45.068545 2705 status_manager.go:890] "Failed to get status for pod" podUID="3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b" pod="kube-system/cilium-operator-6c4d7847fc-lh227" err="pods \"cilium-operator-6c4d7847fc-lh227\" is forbidden: User \"system:node:ci-4081-3-3-n-decaff31fa\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-decaff31fa' and this object"
May 17 00:23:45.082899 systemd[1]: Started cri-containerd-650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf.scope - libcontainer container 650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf.
May 17 00:23:45.103865 containerd[1490]: time="2025-05-17T00:23:45.103827313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7kf49,Uid:db8122d7-503b-47c1-b58a-2e2188ea559b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc6d8d6c848954bdf337f1bde942d73852cf7ead68fa1a0ec1f7db1294efce79\""
May 17 00:23:45.107386 containerd[1490]: time="2025-05-17T00:23:45.107343457Z" level=info msg="CreateContainer within sandbox \"cc6d8d6c848954bdf337f1bde942d73852cf7ead68fa1a0ec1f7db1294efce79\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 17 00:23:45.127144 containerd[1490]: time="2025-05-17T00:23:45.127054490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xqz9z,Uid:acc1aca0-af82-4917-a4bb-9afb519fff17,Namespace:kube-system,Attempt:0,} returns sandbox id \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\""
May 17 00:23:45.127144 containerd[1490]: time="2025-05-17T00:23:45.127111447Z" level=info msg="CreateContainer within sandbox \"cc6d8d6c848954bdf337f1bde942d73852cf7ead68fa1a0ec1f7db1294efce79\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"45f1dd0a6547b5925e252320a88941ee37e8f02e371ea192175c13006f2d5060\""
May 17 00:23:45.128793 containerd[1490]: time="2025-05-17T00:23:45.128768505Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 17 00:23:45.128966 containerd[1490]: time="2025-05-17T00:23:45.128940698Z" level=info msg="StartContainer for \"45f1dd0a6547b5925e252320a88941ee37e8f02e371ea192175c13006f2d5060\""
May 17 00:23:45.149905 kubelet[2705]: I0517 00:23:45.149626 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5cxq\" (UniqueName: \"kubernetes.io/projected/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-kube-api-access-p5cxq\") pod \"cilium-operator-6c4d7847fc-lh227\" (UID: \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\") " pod="kube-system/cilium-operator-6c4d7847fc-lh227"
May 17 00:23:45.149905 kubelet[2705]: I0517 00:23:45.149658 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lh227\" (UID: \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\") " pod="kube-system/cilium-operator-6c4d7847fc-lh227"
May 17 00:23:45.149724 systemd[1]: Started cri-containerd-45f1dd0a6547b5925e252320a88941ee37e8f02e371ea192175c13006f2d5060.scope - libcontainer container 45f1dd0a6547b5925e252320a88941ee37e8f02e371ea192175c13006f2d5060.
May 17 00:23:45.172483 containerd[1490]: time="2025-05-17T00:23:45.172444840Z" level=info msg="StartContainer for \"45f1dd0a6547b5925e252320a88941ee37e8f02e371ea192175c13006f2d5060\" returns successfully"
May 17 00:23:45.372291 containerd[1490]: time="2025-05-17T00:23:45.371256673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lh227,Uid:3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b,Namespace:kube-system,Attempt:0,}"
May 17 00:23:45.399688 kubelet[2705]: I0517 00:23:45.399621 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7kf49" podStartSLOduration=1.3995748080000001 podStartE2EDuration="1.399574808s" podCreationTimestamp="2025-05-17 00:23:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:23:45.380531285 +0000 UTC m=+7.199122340" watchObservedRunningTime="2025-05-17 00:23:45.399574808 +0000 UTC m=+7.218165841"
May 17 00:23:45.417184 containerd[1490]: time="2025-05-17T00:23:45.417020132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:23:45.417557 containerd[1490]: time="2025-05-17T00:23:45.417196122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:23:45.417557 containerd[1490]: time="2025-05-17T00:23:45.417230276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.417557 containerd[1490]: time="2025-05-17T00:23:45.417411566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:23:45.445078 systemd[1]: Started cri-containerd-2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466.scope - libcontainer container 2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466.
May 17 00:23:45.496761 containerd[1490]: time="2025-05-17T00:23:45.496573846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lh227,Uid:3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\""
May 17 00:23:49.363647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3089355285.mount: Deactivated successfully.
May 17 00:23:50.942300 containerd[1490]: time="2025-05-17T00:23:50.942229755Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:50.946636 containerd[1490]: time="2025-05-17T00:23:50.945051576Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:50.946636 containerd[1490]: time="2025-05-17T00:23:50.945262953Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
May 17 00:23:50.950299 containerd[1490]: time="2025-05-17T00:23:50.950242952Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.82143819s"
May 17 00:23:50.950368 containerd[1490]: time="2025-05-17T00:23:50.950308786Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
May 17 00:23:50.953807 containerd[1490]: time="2025-05-17T00:23:50.953776759Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 17 00:23:50.954757 containerd[1490]: time="2025-05-17T00:23:50.954729656Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:23:51.101031 containerd[1490]: time="2025-05-17T00:23:51.100949654Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\""
May 17 00:23:51.101842 containerd[1490]: time="2025-05-17T00:23:51.101677378Z" level=info msg="StartContainer for \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\""
May 17 00:23:51.259500 systemd[1]: run-containerd-runc-k8s.io-69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85-runc.glvGjT.mount: Deactivated successfully.
May 17 00:23:51.270094 systemd[1]: Started cri-containerd-69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85.scope - libcontainer container 69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85.
May 17 00:23:51.302987 containerd[1490]: time="2025-05-17T00:23:51.302940168Z" level=info msg="StartContainer for \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\" returns successfully"
May 17 00:23:51.316939 systemd[1]: cri-containerd-69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85.scope: Deactivated successfully.
May 17 00:23:51.491143 containerd[1490]: time="2025-05-17T00:23:51.476636123Z" level=info msg="shim disconnected" id=69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85 namespace=k8s.io
May 17 00:23:51.491143 containerd[1490]: time="2025-05-17T00:23:51.491120014Z" level=warning msg="cleaning up after shim disconnected" id=69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85 namespace=k8s.io
May 17 00:23:51.491143 containerd[1490]: time="2025-05-17T00:23:51.491140442Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:23:52.091425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85-rootfs.mount: Deactivated successfully.
May 17 00:23:52.414203 containerd[1490]: time="2025-05-17T00:23:52.414095543Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:23:52.471531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1733445613.mount: Deactivated successfully.
May 17 00:23:52.475663 containerd[1490]: time="2025-05-17T00:23:52.475615361Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\""
May 17 00:23:52.477441 containerd[1490]: time="2025-05-17T00:23:52.477405299Z" level=info msg="StartContainer for \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\""
May 17 00:23:52.530008 systemd[1]: Started cri-containerd-bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0.scope - libcontainer container bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0.
May 17 00:23:52.558720 containerd[1490]: time="2025-05-17T00:23:52.558545913Z" level=info msg="StartContainer for \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\" returns successfully"
May 17 00:23:52.571740 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:23:52.572371 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:23:52.572432 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 17 00:23:52.579069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:23:52.580868 systemd[1]: cri-containerd-bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0.scope: Deactivated successfully.
May 17 00:23:52.622473 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:23:52.625076 containerd[1490]: time="2025-05-17T00:23:52.625008952Z" level=info msg="shim disconnected" id=bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0 namespace=k8s.io
May 17 00:23:52.625076 containerd[1490]: time="2025-05-17T00:23:52.625058485Z" level=warning msg="cleaning up after shim disconnected" id=bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0 namespace=k8s.io
May 17 00:23:52.625076 containerd[1490]: time="2025-05-17T00:23:52.625065528Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:23:52.639694 containerd[1490]: time="2025-05-17T00:23:52.638841502Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:23:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:23:52.979279 containerd[1490]: time="2025-05-17T00:23:52.979208699Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:52.980317 containerd[1490]: time="2025-05-17T00:23:52.980109660Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
May 17 00:23:52.982572 containerd[1490]: time="2025-05-17T00:23:52.981339706Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:23:52.982572 containerd[1490]: time="2025-05-17T00:23:52.982446481Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.028379908s"
May 17 00:23:52.982572 containerd[1490]: time="2025-05-17T00:23:52.982482799Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
May 17 00:23:52.989543 containerd[1490]: time="2025-05-17T00:23:52.989487996Z" level=info msg="CreateContainer within sandbox \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 17 00:23:53.012019 containerd[1490]: time="2025-05-17T00:23:53.011937256Z" level=info msg="CreateContainer within sandbox \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\""
May 17 00:23:53.014570 containerd[1490]: time="2025-05-17T00:23:53.014494792Z" level=info msg="StartContainer for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\""
May 17 00:23:53.050816 systemd[1]: Started cri-containerd-b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416.scope - libcontainer container b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416.
May 17 00:23:53.082358 containerd[1490]: time="2025-05-17T00:23:53.082269712Z" level=info msg="StartContainer for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" returns successfully"
May 17 00:23:53.095221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0-rootfs.mount: Deactivated successfully.
May 17 00:23:53.422464 containerd[1490]: time="2025-05-17T00:23:53.422396601Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:23:53.456978 containerd[1490]: time="2025-05-17T00:23:53.456938958Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\""
May 17 00:23:53.458057 containerd[1490]: time="2025-05-17T00:23:53.458039202Z" level=info msg="StartContainer for \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\""
May 17 00:23:53.493342 systemd[1]: run-containerd-runc-k8s.io-0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362-runc.apso9j.mount: Deactivated successfully.
May 17 00:23:53.505692 systemd[1]: Started cri-containerd-0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362.scope - libcontainer container 0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362.
May 17 00:23:53.581134 kubelet[2705]: I0517 00:23:53.581066 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lh227" podStartSLOduration=1.095197519 podStartE2EDuration="8.581048624s" podCreationTimestamp="2025-05-17 00:23:45 +0000 UTC" firstStartedPulling="2025-05-17 00:23:45.497855178 +0000 UTC m=+7.316446193" lastFinishedPulling="2025-05-17 00:23:52.983706284 +0000 UTC m=+14.802297298" observedRunningTime="2025-05-17 00:23:53.519946458 +0000 UTC m=+15.338537473" watchObservedRunningTime="2025-05-17 00:23:53.581048624 +0000 UTC m=+15.399639639"
May 17 00:23:53.592748 containerd[1490]: time="2025-05-17T00:23:53.592453633Z" level=info msg="StartContainer for \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\" returns successfully"
May 17 00:23:53.612518 systemd[1]: cri-containerd-0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362.scope: Deactivated successfully.
May 17 00:23:53.682815 containerd[1490]: time="2025-05-17T00:23:53.682474922Z" level=info msg="shim disconnected" id=0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362 namespace=k8s.io
May 17 00:23:53.682815 containerd[1490]: time="2025-05-17T00:23:53.682539292Z" level=warning msg="cleaning up after shim disconnected" id=0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362 namespace=k8s.io
May 17 00:23:53.682815 containerd[1490]: time="2025-05-17T00:23:53.682549101Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:23:54.090936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362-rootfs.mount: Deactivated successfully.
May 17 00:23:54.441460 containerd[1490]: time="2025-05-17T00:23:54.441282424Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:23:54.480025 containerd[1490]: time="2025-05-17T00:23:54.479945682Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\""
May 17 00:23:54.482978 containerd[1490]: time="2025-05-17T00:23:54.481159147Z" level=info msg="StartContainer for \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\""
May 17 00:23:54.522798 systemd[1]: Started cri-containerd-2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c.scope - libcontainer container 2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c.
May 17 00:23:54.551282 systemd[1]: cri-containerd-2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c.scope: Deactivated successfully.
May 17 00:23:54.552405 containerd[1490]: time="2025-05-17T00:23:54.552302475Z" level=info msg="StartContainer for \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\" returns successfully"
May 17 00:23:54.576746 containerd[1490]: time="2025-05-17T00:23:54.576664962Z" level=info msg="shim disconnected" id=2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c namespace=k8s.io
May 17 00:23:54.576746 containerd[1490]: time="2025-05-17T00:23:54.576730565Z" level=warning msg="cleaning up after shim disconnected" id=2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c namespace=k8s.io
May 17 00:23:54.576746 containerd[1490]: time="2025-05-17T00:23:54.576744572Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:23:54.596952 containerd[1490]: time="2025-05-17T00:23:54.596842302Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:23:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:23:55.094776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c-rootfs.mount: Deactivated successfully.
May 17 00:23:55.436882 containerd[1490]: time="2025-05-17T00:23:55.436797369Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:23:55.466252 containerd[1490]: time="2025-05-17T00:23:55.466169832Z" level=info msg="CreateContainer within sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\""
May 17 00:23:55.470166 containerd[1490]: time="2025-05-17T00:23:55.470092227Z" level=info msg="StartContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\""
May 17 00:23:55.513697 systemd[1]: run-containerd-runc-k8s.io-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2-runc.9QbsZN.mount: Deactivated successfully.
May 17 00:23:55.521819 systemd[1]: Started cri-containerd-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2.scope - libcontainer container d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2.
May 17 00:23:55.564440 containerd[1490]: time="2025-05-17T00:23:55.564379908Z" level=info msg="StartContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" returns successfully"
May 17 00:23:55.752541 kubelet[2705]: I0517 00:23:55.751784 2705 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 17 00:23:55.813368 kubelet[2705]: I0517 00:23:55.812764 2705 status_manager.go:890] "Failed to get status for pod" podUID="86a1febd-ff95-4ad6-a534-4e1ed4dbce36" pod="kube-system/coredns-668d6bf9bc-tcjjf" err="pods \"coredns-668d6bf9bc-tcjjf\" is forbidden: User \"system:node:ci-4081-3-3-n-decaff31fa\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-decaff31fa' and this object"
May 17 00:23:55.816899 kubelet[2705]: W0517 00:23:55.815982 2705 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-3-n-decaff31fa" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-3-n-decaff31fa' and this object
May 17 00:23:55.820751 kubelet[2705]: E0517 00:23:55.820674 2705 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-3-n-decaff31fa\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-3-n-decaff31fa' and this object" logger="UnhandledError"
May 17 00:23:55.822451 systemd[1]: Created slice kubepods-burstable-pod86a1febd_ff95_4ad6_a534_4e1ed4dbce36.slice - libcontainer container kubepods-burstable-pod86a1febd_ff95_4ad6_a534_4e1ed4dbce36.slice.
May 17 00:23:55.834121 kubelet[2705]: I0517 00:23:55.834050 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlggv\" (UniqueName: \"kubernetes.io/projected/0ed5f4cb-c49e-4888-940c-175ecaec97b4-kube-api-access-xlggv\") pod \"coredns-668d6bf9bc-htnp4\" (UID: \"0ed5f4cb-c49e-4888-940c-175ecaec97b4\") " pod="kube-system/coredns-668d6bf9bc-htnp4"
May 17 00:23:55.834434 kubelet[2705]: I0517 00:23:55.834410 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ed5f4cb-c49e-4888-940c-175ecaec97b4-config-volume\") pod \"coredns-668d6bf9bc-htnp4\" (UID: \"0ed5f4cb-c49e-4888-940c-175ecaec97b4\") " pod="kube-system/coredns-668d6bf9bc-htnp4"
May 17 00:23:55.834476 kubelet[2705]: I0517 00:23:55.834456 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86a1febd-ff95-4ad6-a534-4e1ed4dbce36-config-volume\") pod \"coredns-668d6bf9bc-tcjjf\" (UID: \"86a1febd-ff95-4ad6-a534-4e1ed4dbce36\") " pod="kube-system/coredns-668d6bf9bc-tcjjf"
May 17 00:23:55.834505 kubelet[2705]: I0517 00:23:55.834486 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frmm2\" (UniqueName: \"kubernetes.io/projected/86a1febd-ff95-4ad6-a534-4e1ed4dbce36-kube-api-access-frmm2\") pod \"coredns-668d6bf9bc-tcjjf\" (UID: \"86a1febd-ff95-4ad6-a534-4e1ed4dbce36\") " pod="kube-system/coredns-668d6bf9bc-tcjjf"
May 17 00:23:55.835488 systemd[1]: Created slice kubepods-burstable-pod0ed5f4cb_c49e_4888_940c_175ecaec97b4.slice - libcontainer container kubepods-burstable-pod0ed5f4cb_c49e_4888_940c_175ecaec97b4.slice.
May 17 00:23:56.468412 kubelet[2705]: I0517 00:23:56.468316 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xqz9z" podStartSLOduration=6.643864487 podStartE2EDuration="12.468289538s" podCreationTimestamp="2025-05-17 00:23:44 +0000 UTC" firstStartedPulling="2025-05-17 00:23:45.128390306 +0000 UTC m=+6.946981320" lastFinishedPulling="2025-05-17 00:23:50.952815336 +0000 UTC m=+12.771406371" observedRunningTime="2025-05-17 00:23:56.467421029 +0000 UTC m=+18.286012073" watchObservedRunningTime="2025-05-17 00:23:56.468289538 +0000 UTC m=+18.286880552"
May 17 00:23:56.730988 containerd[1490]: time="2025-05-17T00:23:56.730489073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcjjf,Uid:86a1febd-ff95-4ad6-a534-4e1ed4dbce36,Namespace:kube-system,Attempt:0,}"
May 17 00:23:56.740906 containerd[1490]: time="2025-05-17T00:23:56.740441847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htnp4,Uid:0ed5f4cb-c49e-4888-940c-175ecaec97b4,Namespace:kube-system,Attempt:0,}"
May 17 00:23:57.415522 systemd-networkd[1395]: cilium_host: Link UP
May 17 00:23:57.418863 systemd-networkd[1395]: cilium_net: Link UP
May 17 00:23:57.419184 systemd-networkd[1395]: cilium_net: Gained carrier
May 17 00:23:57.419416 systemd-networkd[1395]: cilium_host: Gained carrier
May 17 00:23:57.456698 systemd-networkd[1395]: cilium_host: Gained IPv6LL
May 17 00:23:57.567798 systemd-networkd[1395]: cilium_vxlan: Link UP
May 17 00:23:57.568003 systemd-networkd[1395]: cilium_vxlan: Gained carrier
May 17 00:23:57.694781 systemd-networkd[1395]: cilium_net: Gained IPv6LL
May 17 00:23:57.979676 kernel: NET: Registered PF_ALG protocol family
May 17 00:23:58.686907 systemd-networkd[1395]: lxc_health: Link UP
May 17 00:23:58.699506 systemd-networkd[1395]: lxc_health: Gained carrier
May 17 00:23:58.841566 systemd-networkd[1395]: lxc998d3e992068: Link UP
May 17 00:23:58.847982 kernel: eth0: renamed from tmpd803f
May 17 00:23:58.853428 systemd-networkd[1395]: lxc998d3e992068: Gained carrier
May 17 00:23:58.858380 systemd-networkd[1395]: lxc79d9b8f4a110: Link UP
May 17 00:23:58.868248 kernel: eth0: renamed from tmp6116b
May 17 00:23:58.875031 systemd-networkd[1395]: lxc79d9b8f4a110: Gained carrier
May 17 00:23:59.031778 systemd-networkd[1395]: cilium_vxlan: Gained IPv6LL
May 17 00:24:00.438916 systemd-networkd[1395]: lxc998d3e992068: Gained IPv6LL
May 17 00:24:00.503092 systemd-networkd[1395]: lxc_health: Gained IPv6LL
May 17 00:24:00.567077 systemd-networkd[1395]: lxc79d9b8f4a110: Gained IPv6LL
May 17 00:24:02.535648 containerd[1490]: time="2025-05-17T00:24:02.535150823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:02.536062 containerd[1490]: time="2025-05-17T00:24:02.536021746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:02.536252 containerd[1490]: time="2025-05-17T00:24:02.536141471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:02.536741 containerd[1490]: time="2025-05-17T00:24:02.536704076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:02.573064 containerd[1490]: time="2025-05-17T00:24:02.571885464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:02.573064 containerd[1490]: time="2025-05-17T00:24:02.572657062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:02.573064 containerd[1490]: time="2025-05-17T00:24:02.572674595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:02.573064 containerd[1490]: time="2025-05-17T00:24:02.572758212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:02.576171 systemd[1]: run-containerd-runc-k8s.io-d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719-runc.mDyRkO.mount: Deactivated successfully.
May 17 00:24:02.595752 systemd[1]: Started cri-containerd-d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719.scope - libcontainer container d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719.
May 17 00:24:02.612863 systemd[1]: Started cri-containerd-6116bf2862b88ac052a10271dc34608b3dd13fecb5c28b1db759a8cbe8568e72.scope - libcontainer container 6116bf2862b88ac052a10271dc34608b3dd13fecb5c28b1db759a8cbe8568e72.
May 17 00:24:02.672431 containerd[1490]: time="2025-05-17T00:24:02.672387865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-htnp4,Uid:0ed5f4cb-c49e-4888-940c-175ecaec97b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6116bf2862b88ac052a10271dc34608b3dd13fecb5c28b1db759a8cbe8568e72\""
May 17 00:24:02.676552 containerd[1490]: time="2025-05-17T00:24:02.676519974Z" level=info msg="CreateContainer within sandbox \"6116bf2862b88ac052a10271dc34608b3dd13fecb5c28b1db759a8cbe8568e72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:24:02.717477 containerd[1490]: time="2025-05-17T00:24:02.716411585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tcjjf,Uid:86a1febd-ff95-4ad6-a534-4e1ed4dbce36,Namespace:kube-system,Attempt:0,} returns sandbox id \"d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719\""
May 17 00:24:02.718199 containerd[1490]: time="2025-05-17T00:24:02.718172519Z" level=info msg="CreateContainer within sandbox \"6116bf2862b88ac052a10271dc34608b3dd13fecb5c28b1db759a8cbe8568e72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fdabb785c0791603178dd744bddaafa3b10abfcd5e71736ccfa02ee689778cb0\""
May 17 00:24:02.720377 containerd[1490]: time="2025-05-17T00:24:02.720350863Z" level=info msg="StartContainer for \"fdabb785c0791603178dd744bddaafa3b10abfcd5e71736ccfa02ee689778cb0\""
May 17 00:24:02.724171 containerd[1490]: time="2025-05-17T00:24:02.724135260Z" level=info msg="CreateContainer within sandbox \"d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 17 00:24:02.733799 containerd[1490]: time="2025-05-17T00:24:02.733754029Z" level=info msg="CreateContainer within sandbox \"d803f5664efa5150f45ca83043c6819ea7530e9b95c493b3fb660f1ada8c3719\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1b7214ed218196f52d50450e5c8d1f61c38a167319bf2c1acea4cb70b3b7602\""
May 17 00:24:02.736082 containerd[1490]: time="2025-05-17T00:24:02.735903440Z" level=info msg="StartContainer for \"c1b7214ed218196f52d50450e5c8d1f61c38a167319bf2c1acea4cb70b3b7602\""
May 17 00:24:02.778719 systemd[1]: Started cri-containerd-fdabb785c0791603178dd744bddaafa3b10abfcd5e71736ccfa02ee689778cb0.scope - libcontainer container fdabb785c0791603178dd744bddaafa3b10abfcd5e71736ccfa02ee689778cb0.
May 17 00:24:02.787783 systemd[1]: Started cri-containerd-c1b7214ed218196f52d50450e5c8d1f61c38a167319bf2c1acea4cb70b3b7602.scope - libcontainer container c1b7214ed218196f52d50450e5c8d1f61c38a167319bf2c1acea4cb70b3b7602.
May 17 00:24:02.821210 containerd[1490]: time="2025-05-17T00:24:02.821139260Z" level=info msg="StartContainer for \"c1b7214ed218196f52d50450e5c8d1f61c38a167319bf2c1acea4cb70b3b7602\" returns successfully"
May 17 00:24:02.821328 containerd[1490]: time="2025-05-17T00:24:02.821168785Z" level=info msg="StartContainer for \"fdabb785c0791603178dd744bddaafa3b10abfcd5e71736ccfa02ee689778cb0\" returns successfully"
May 17 00:24:03.501985 kubelet[2705]: I0517 00:24:03.498226 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-htnp4" podStartSLOduration=18.498198847 podStartE2EDuration="18.498198847s" podCreationTimestamp="2025-05-17 00:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:03.494636455 +0000 UTC m=+25.313227539" watchObservedRunningTime="2025-05-17 00:24:03.498198847 +0000 UTC m=+25.316789891"
May 17 00:24:03.551734 kubelet[2705]: I0517 00:24:03.551664 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tcjjf" podStartSLOduration=18.551640571 podStartE2EDuration="18.551640571s" podCreationTimestamp="2025-05-17 00:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:24:03.549655568 +0000 UTC m=+25.368246581" watchObservedRunningTime="2025-05-17 00:24:03.551640571 +0000 UTC m=+25.370231604"
May 17 00:26:23.816225 update_engine[1475]: I20250517 00:26:23.816131 1475 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
May 17 00:26:23.816225 update_engine[1475]: I20250517 00:26:23.816217 1475 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
May 17 00:26:23.822973 update_engine[1475]: I20250517 00:26:23.822921 1475 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
May 17 00:26:23.824063 update_engine[1475]: I20250517 00:26:23.823993 1475 omaha_request_params.cc:62] Current group set to lts
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824342 1475 update_attempter.cc:499] Already updated boot flags. Skipping.
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824370 1475 update_attempter.cc:643] Scheduling an action processor start.
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824404 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824481 1475 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824630 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824651 1475 omaha_request_action.cc:272] Request:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]:
May 17 00:26:23.824926 update_engine[1475]: I20250517 00:26:23.824666 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:26:23.844031 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
May 17 00:26:23.844428 update_engine[1475]: I20250517 00:26:23.844084 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:26:23.844651 update_engine[1475]: I20250517 00:26:23.844533 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:26:23.845826 update_engine[1475]: E20250517 00:26:23.845777 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:26:23.845923 update_engine[1475]: I20250517 00:26:23.845885 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
May 17 00:26:33.696388 update_engine[1475]: I20250517 00:26:33.696264 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:26:33.696925 update_engine[1475]: I20250517 00:26:33.696674 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:26:33.697028 update_engine[1475]: I20250517 00:26:33.696985 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:26:33.698072 update_engine[1475]: E20250517 00:26:33.697999 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:26:33.698232 update_engine[1475]: I20250517 00:26:33.698087 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
May 17 00:26:43.697242 update_engine[1475]: I20250517 00:26:43.697125 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:26:43.697830 update_engine[1475]: I20250517 00:26:43.697578 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:26:43.698113 update_engine[1475]: I20250517 00:26:43.698019 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:26:43.699028 update_engine[1475]: E20250517 00:26:43.698973 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:26:43.699095 update_engine[1475]: I20250517 00:26:43.699070 1475 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
May 17 00:26:53.698301 update_engine[1475]: I20250517 00:26:53.698056 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:26:53.699418 update_engine[1475]: I20250517 00:26:53.698474 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:26:53.699418 update_engine[1475]: I20250517 00:26:53.698839 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:26:53.699761 update_engine[1475]: E20250517 00:26:53.699693 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:26:53.699856 update_engine[1475]: I20250517 00:26:53.699776 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:26:53.699856 update_engine[1475]: I20250517 00:26:53.699790 1475 omaha_request_action.cc:617] Omaha request response:
May 17 00:26:53.699975 update_engine[1475]: E20250517 00:26:53.699895 1475 omaha_request_action.cc:636] Omaha request network transfer failed.
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701809 1475 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701843 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701853 1475 update_attempter.cc:306] Processing Done.
May 17 00:26:53.702685 update_engine[1475]: E20250517 00:26:53.701871 1475 update_attempter.cc:619] Update failed.
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701880 1475 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701889 1475 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.701898 1475 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.702008 1475 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.702041 1475 omaha_request_action.cc:271] Posting an Omaha request to disabled
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.702050 1475 omaha_request_action.cc:272] Request:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]:
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.702060 1475 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
May 17 00:26:53.702685 update_engine[1475]: I20250517 00:26:53.702265 1475 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.702548 1475 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
May 17 00:26:53.704613 update_engine[1475]: E20250517 00:26:53.703646 1475 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703712 1475 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703725 1475 omaha_request_action.cc:617] Omaha request response:
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703735 1475 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703744 1475 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703752 1475 update_attempter.cc:306] Processing Done.
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703761 1475 update_attempter.cc:310] Error event sent.
May 17 00:26:53.704613 update_engine[1475]: I20250517 00:26:53.703775 1475 update_check_scheduler.cc:74] Next update check in 47m47s
May 17 00:26:53.705073 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
May 17 00:26:53.705073 locksmithd[1513]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
May 17 00:28:15.024992 systemd[1]: Started sshd@7-37.27.213.195:22-139.178.89.65:40988.service - OpenSSH per-connection server daemon (139.178.89.65:40988).
May 17 00:28:16.023058 sshd[4106]: Accepted publickey for core from 139.178.89.65 port 40988 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:16.025896 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:16.032514 systemd-logind[1474]: New session 8 of user core.
May 17 00:28:16.038825 systemd[1]: Started session-8.scope - Session 8 of User core.
May 17 00:28:17.360947 sshd[4106]: pam_unix(sshd:session): session closed for user core
May 17 00:28:17.365941 systemd[1]: sshd@7-37.27.213.195:22-139.178.89.65:40988.service: Deactivated successfully.
May 17 00:28:17.369231 systemd[1]: session-8.scope: Deactivated successfully.
May 17 00:28:17.370552 systemd-logind[1474]: Session 8 logged out. Waiting for processes to exit.
May 17 00:28:17.372137 systemd-logind[1474]: Removed session 8.
May 17 00:28:22.539091 systemd[1]: Started sshd@8-37.27.213.195:22-139.178.89.65:46186.service - OpenSSH per-connection server daemon (139.178.89.65:46186).
May 17 00:28:23.532988 sshd[4122]: Accepted publickey for core from 139.178.89.65 port 46186 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:23.535135 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:23.541877 systemd-logind[1474]: New session 9 of user core.
May 17 00:28:23.550885 systemd[1]: Started session-9.scope - Session 9 of User core.
May 17 00:28:24.338483 sshd[4122]: pam_unix(sshd:session): session closed for user core
May 17 00:28:24.343440 systemd[1]: sshd@8-37.27.213.195:22-139.178.89.65:46186.service: Deactivated successfully.
May 17 00:28:24.346508 systemd[1]: session-9.scope: Deactivated successfully.
May 17 00:28:24.347539 systemd-logind[1474]: Session 9 logged out. Waiting for processes to exit.
May 17 00:28:24.349074 systemd-logind[1474]: Removed session 9.
May 17 00:28:29.516062 systemd[1]: Started sshd@9-37.27.213.195:22-139.178.89.65:48634.service - OpenSSH per-connection server daemon (139.178.89.65:48634).
May 17 00:28:30.506511 sshd[4136]: Accepted publickey for core from 139.178.89.65 port 48634 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:30.508645 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:30.514685 systemd-logind[1474]: New session 10 of user core.
May 17 00:28:30.519859 systemd[1]: Started session-10.scope - Session 10 of User core.
May 17 00:28:31.267264 sshd[4136]: pam_unix(sshd:session): session closed for user core
May 17 00:28:31.273233 systemd-logind[1474]: Session 10 logged out. Waiting for processes to exit.
May 17 00:28:31.274056 systemd[1]: sshd@9-37.27.213.195:22-139.178.89.65:48634.service: Deactivated successfully.
May 17 00:28:31.276536 systemd[1]: session-10.scope: Deactivated successfully.
May 17 00:28:31.279044 systemd-logind[1474]: Removed session 10.
May 17 00:28:31.444128 systemd[1]: Started sshd@10-37.27.213.195:22-139.178.89.65:48650.service - OpenSSH per-connection server daemon (139.178.89.65:48650).
May 17 00:28:32.454145 sshd[4150]: Accepted publickey for core from 139.178.89.65 port 48650 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:32.456366 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:32.463268 systemd-logind[1474]: New session 11 of user core.
May 17 00:28:32.470836 systemd[1]: Started session-11.scope - Session 11 of User core.
May 17 00:28:33.332054 sshd[4150]: pam_unix(sshd:session): session closed for user core
May 17 00:28:33.341901 systemd[1]: sshd@10-37.27.213.195:22-139.178.89.65:48650.service: Deactivated successfully.
May 17 00:28:33.345820 systemd[1]: session-11.scope: Deactivated successfully.
May 17 00:28:33.347279 systemd-logind[1474]: Session 11 logged out. Waiting for processes to exit.
May 17 00:28:33.349252 systemd-logind[1474]: Removed session 11.
May 17 00:28:33.511232 systemd[1]: Started sshd@11-37.27.213.195:22-139.178.89.65:48652.service - OpenSSH per-connection server daemon (139.178.89.65:48652).
May 17 00:28:34.496485 sshd[4162]: Accepted publickey for core from 139.178.89.65 port 48652 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:34.499677 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:34.509249 systemd-logind[1474]: New session 12 of user core.
May 17 00:28:34.513986 systemd[1]: Started session-12.scope - Session 12 of User core.
May 17 00:28:35.274912 sshd[4162]: pam_unix(sshd:session): session closed for user core
May 17 00:28:35.279153 systemd[1]: sshd@11-37.27.213.195:22-139.178.89.65:48652.service: Deactivated successfully.
May 17 00:28:35.282172 systemd[1]: session-12.scope: Deactivated successfully.
May 17 00:28:35.284433 systemd-logind[1474]: Session 12 logged out. Waiting for processes to exit.
May 17 00:28:35.286283 systemd-logind[1474]: Removed session 12.
May 17 00:28:40.445429 systemd[1]: Started sshd@12-37.27.213.195:22-139.178.89.65:34136.service - OpenSSH per-connection server daemon (139.178.89.65:34136).
May 17 00:28:41.429568 sshd[4177]: Accepted publickey for core from 139.178.89.65 port 34136 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:41.431681 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:41.438705 systemd-logind[1474]: New session 13 of user core.
May 17 00:28:41.448897 systemd[1]: Started session-13.scope - Session 13 of User core.
May 17 00:28:42.197503 sshd[4177]: pam_unix(sshd:session): session closed for user core
May 17 00:28:42.207078 systemd[1]: sshd@12-37.27.213.195:22-139.178.89.65:34136.service: Deactivated successfully.
May 17 00:28:42.210527 systemd[1]: session-13.scope: Deactivated successfully.
May 17 00:28:42.212192 systemd-logind[1474]: Session 13 logged out. Waiting for processes to exit.
May 17 00:28:42.214023 systemd-logind[1474]: Removed session 13.
May 17 00:28:42.377112 systemd[1]: Started sshd@13-37.27.213.195:22-139.178.89.65:34148.service - OpenSSH per-connection server daemon (139.178.89.65:34148).
May 17 00:28:43.374255 sshd[4191]: Accepted publickey for core from 139.178.89.65 port 34148 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:43.376871 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:43.385468 systemd-logind[1474]: New session 14 of user core.
May 17 00:28:43.391919 systemd[1]: Started session-14.scope - Session 14 of User core.
May 17 00:28:44.362035 sshd[4191]: pam_unix(sshd:session): session closed for user core
May 17 00:28:44.372352 systemd[1]: sshd@13-37.27.213.195:22-139.178.89.65:34148.service: Deactivated successfully.
May 17 00:28:44.375977 systemd[1]: session-14.scope: Deactivated successfully.
May 17 00:28:44.377835 systemd-logind[1474]: Session 14 logged out. Waiting for processes to exit.
May 17 00:28:44.379720 systemd-logind[1474]: Removed session 14.
May 17 00:28:44.537965 systemd[1]: Started sshd@14-37.27.213.195:22-139.178.89.65:34156.service - OpenSSH per-connection server daemon (139.178.89.65:34156).
May 17 00:28:45.535102 sshd[4202]: Accepted publickey for core from 139.178.89.65 port 34156 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:45.537302 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:45.545132 systemd-logind[1474]: New session 15 of user core.
May 17 00:28:45.552949 systemd[1]: Started session-15.scope - Session 15 of User core.
May 17 00:28:47.385899 sshd[4202]: pam_unix(sshd:session): session closed for user core
May 17 00:28:47.390671 systemd-logind[1474]: Session 15 logged out. Waiting for processes to exit.
May 17 00:28:47.392185 systemd[1]: sshd@14-37.27.213.195:22-139.178.89.65:34156.service: Deactivated successfully.
May 17 00:28:47.395954 systemd[1]: session-15.scope: Deactivated successfully.
May 17 00:28:47.400072 systemd-logind[1474]: Removed session 15.
May 17 00:28:47.555017 systemd[1]: Started sshd@15-37.27.213.195:22-139.178.89.65:53290.service - OpenSSH per-connection server daemon (139.178.89.65:53290).
May 17 00:28:48.555944 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 53290 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:48.558895 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:48.565544 systemd-logind[1474]: New session 16 of user core.
May 17 00:28:48.571821 systemd[1]: Started session-16.scope - Session 16 of User core.
May 17 00:28:49.507739 sshd[4222]: pam_unix(sshd:session): session closed for user core
May 17 00:28:49.512241 systemd[1]: sshd@15-37.27.213.195:22-139.178.89.65:53290.service: Deactivated successfully.
May 17 00:28:49.515057 systemd[1]: session-16.scope: Deactivated successfully.
May 17 00:28:49.516881 systemd-logind[1474]: Session 16 logged out. Waiting for processes to exit.
May 17 00:28:49.519014 systemd-logind[1474]: Removed session 16.
May 17 00:28:49.680945 systemd[1]: Started sshd@16-37.27.213.195:22-139.178.89.65:53296.service - OpenSSH per-connection server daemon (139.178.89.65:53296).
May 17 00:28:50.654977 sshd[4233]: Accepted publickey for core from 139.178.89.65 port 53296 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:50.657132 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:50.664161 systemd-logind[1474]: New session 17 of user core.
May 17 00:28:50.673774 systemd[1]: Started session-17.scope - Session 17 of User core.
May 17 00:28:51.442131 sshd[4233]: pam_unix(sshd:session): session closed for user core
May 17 00:28:51.448777 systemd[1]: sshd@16-37.27.213.195:22-139.178.89.65:53296.service: Deactivated successfully.
May 17 00:28:51.452182 systemd[1]: session-17.scope: Deactivated successfully.
May 17 00:28:51.453430 systemd-logind[1474]: Session 17 logged out. Waiting for processes to exit.
May 17 00:28:51.455093 systemd-logind[1474]: Removed session 17.
May 17 00:28:56.620996 systemd[1]: Started sshd@17-37.27.213.195:22-139.178.89.65:53298.service - OpenSSH per-connection server daemon (139.178.89.65:53298).
May 17 00:28:57.596311 sshd[4249]: Accepted publickey for core from 139.178.89.65 port 53298 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:28:57.598882 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:28:57.606706 systemd-logind[1474]: New session 18 of user core.
May 17 00:28:57.612897 systemd[1]: Started session-18.scope - Session 18 of User core.
May 17 00:28:58.366936 sshd[4249]: pam_unix(sshd:session): session closed for user core
May 17 00:28:58.372757 systemd[1]: sshd@17-37.27.213.195:22-139.178.89.65:53298.service: Deactivated successfully.
May 17 00:28:58.376804 systemd[1]: session-18.scope: Deactivated successfully.
May 17 00:28:58.378257 systemd-logind[1474]: Session 18 logged out. Waiting for processes to exit.
May 17 00:28:58.380480 systemd-logind[1474]: Removed session 18.
May 17 00:29:03.538904 systemd[1]: Started sshd@18-37.27.213.195:22-139.178.89.65:39298.service - OpenSSH per-connection server daemon (139.178.89.65:39298).
May 17 00:29:04.523438 sshd[4263]: Accepted publickey for core from 139.178.89.65 port 39298 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:04.525803 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:04.534007 systemd-logind[1474]: New session 19 of user core.
May 17 00:29:04.538781 systemd[1]: Started session-19.scope - Session 19 of User core.
May 17 00:29:05.293059 sshd[4263]: pam_unix(sshd:session): session closed for user core
May 17 00:29:05.296329 systemd[1]: sshd@18-37.27.213.195:22-139.178.89.65:39298.service: Deactivated successfully.
May 17 00:29:05.298229 systemd[1]: session-19.scope: Deactivated successfully.
May 17 00:29:05.299780 systemd-logind[1474]: Session 19 logged out. Waiting for processes to exit.
May 17 00:29:05.301564 systemd-logind[1474]: Removed session 19.
May 17 00:29:05.465482 systemd[1]: Started sshd@19-37.27.213.195:22-139.178.89.65:39308.service - OpenSSH per-connection server daemon (139.178.89.65:39308).
May 17 00:29:06.449517 sshd[4276]: Accepted publickey for core from 139.178.89.65 port 39308 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:06.452297 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:06.460967 systemd-logind[1474]: New session 20 of user core.
May 17 00:29:06.469605 systemd[1]: Started session-20.scope - Session 20 of User core.
May 17 00:29:08.433472 systemd[1]: run-containerd-runc-k8s.io-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2-runc.hBQguY.mount: Deactivated successfully.
May 17 00:29:08.464063 containerd[1490]: time="2025-05-17T00:29:08.463681234Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:29:08.464063 containerd[1490]: time="2025-05-17T00:29:08.464002436Z" level=info msg="StopContainer for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" with timeout 30 (s)"
May 17 00:29:08.466157 containerd[1490]: time="2025-05-17T00:29:08.466046950Z" level=info msg="Stop container \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" with signal terminated"
May 17 00:29:08.474291 containerd[1490]: time="2025-05-17T00:29:08.474270181Z" level=info msg="StopContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" with timeout 2 (s)"
May 17 00:29:08.474979 containerd[1490]: time="2025-05-17T00:29:08.474808501Z" level=info msg="Stop container \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" with signal terminated"
May 17 00:29:08.478367 systemd[1]: cri-containerd-b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416.scope: Deactivated successfully.
May 17 00:29:08.490517 kubelet[2705]: E0517 00:29:08.476775 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:29:08.493182 systemd-networkd[1395]: lxc_health: Link DOWN
May 17 00:29:08.493187 systemd-networkd[1395]: lxc_health: Lost carrier
May 17 00:29:08.512469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416-rootfs.mount: Deactivated successfully.
May 17 00:29:08.517503 systemd[1]: cri-containerd-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2.scope: Deactivated successfully.
May 17 00:29:08.517736 systemd[1]: cri-containerd-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2.scope: Consumed 8.094s CPU time.
May 17 00:29:08.527478 containerd[1490]: time="2025-05-17T00:29:08.527342882Z" level=info msg="shim disconnected" id=b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416 namespace=k8s.io
May 17 00:29:08.527478 containerd[1490]: time="2025-05-17T00:29:08.527430235Z" level=warning msg="cleaning up after shim disconnected" id=b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416 namespace=k8s.io
May 17 00:29:08.527478 containerd[1490]: time="2025-05-17T00:29:08.527441286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:08.545255 containerd[1490]: time="2025-05-17T00:29:08.545132473Z" level=info msg="StopContainer for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" returns successfully"
May 17 00:29:08.546982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2-rootfs.mount: Deactivated successfully.
May 17 00:29:08.555678 containerd[1490]: time="2025-05-17T00:29:08.555635169Z" level=info msg="StopPodSandbox for \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\""
May 17 00:29:08.557727 containerd[1490]: time="2025-05-17T00:29:08.555692086Z" level=info msg="Container to stop \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.557727 containerd[1490]: time="2025-05-17T00:29:08.556632109Z" level=info msg="shim disconnected" id=d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2 namespace=k8s.io
May 17 00:29:08.557727 containerd[1490]: time="2025-05-17T00:29:08.556663959Z" level=warning msg="cleaning up after shim disconnected" id=d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2 namespace=k8s.io
May 17 00:29:08.557727 containerd[1490]: time="2025-05-17T00:29:08.556678677Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:08.557487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466-shm.mount: Deactivated successfully.
May 17 00:29:08.565314 systemd[1]: cri-containerd-2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466.scope: Deactivated successfully.
May 17 00:29:08.575212 containerd[1490]: time="2025-05-17T00:29:08.574800711Z" level=info msg="StopContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" returns successfully"
May 17 00:29:08.575351 containerd[1490]: time="2025-05-17T00:29:08.575322349Z" level=info msg="StopPodSandbox for \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\""
May 17 00:29:08.575386 containerd[1490]: time="2025-05-17T00:29:08.575366953Z" level=info msg="Container to stop \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.575386 containerd[1490]: time="2025-05-17T00:29:08.575379768Z" level=info msg="Container to stop \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.575452 containerd[1490]: time="2025-05-17T00:29:08.575389375Z" level=info msg="Container to stop \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.575452 containerd[1490]: time="2025-05-17T00:29:08.575400346Z" level=info msg="Container to stop \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.575452 containerd[1490]: time="2025-05-17T00:29:08.575409653Z" level=info msg="Container to stop \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 17 00:29:08.581810 systemd[1]: cri-containerd-650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf.scope: Deactivated successfully.
May 17 00:29:08.600371 containerd[1490]: time="2025-05-17T00:29:08.600191238Z" level=info msg="shim disconnected" id=2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466 namespace=k8s.io
May 17 00:29:08.600371 containerd[1490]: time="2025-05-17T00:29:08.600243966Z" level=warning msg="cleaning up after shim disconnected" id=2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466 namespace=k8s.io
May 17 00:29:08.600371 containerd[1490]: time="2025-05-17T00:29:08.600251140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:08.600824 containerd[1490]: time="2025-05-17T00:29:08.600697107Z" level=info msg="shim disconnected" id=650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf namespace=k8s.io
May 17 00:29:08.600824 containerd[1490]: time="2025-05-17T00:29:08.600740969Z" level=warning msg="cleaning up after shim disconnected" id=650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf namespace=k8s.io
May 17 00:29:08.600824 containerd[1490]: time="2025-05-17T00:29:08.600749565Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:08.612976 containerd[1490]: time="2025-05-17T00:29:08.612910641Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:29:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 17 00:29:08.623526 containerd[1490]: time="2025-05-17T00:29:08.623309072Z" level=info msg="TearDown network for sandbox \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" successfully"
May 17 00:29:08.623526 containerd[1490]: time="2025-05-17T00:29:08.623356841Z" level=info msg="StopPodSandbox for \"650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf\" returns successfully"
May 17 00:29:08.624669 containerd[1490]: time="2025-05-17T00:29:08.624622876Z" level=info msg="TearDown network for sandbox \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\" successfully"
May 17 00:29:08.624669 containerd[1490]: time="2025-05-17T00:29:08.624658783Z" level=info msg="StopPodSandbox for \"2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466\" returns successfully"
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701842 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-bpf-maps\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701891 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-hostproc\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701904 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-lib-modules\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701916 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cni-path\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701928 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-cgroup\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702659 kubelet[2705]: I0517 00:29:08.701942 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-net\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.701955 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-xtables-lock\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.701977 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5cxq\" (UniqueName: \"kubernetes.io/projected/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-kube-api-access-p5cxq\") pod \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\" (UID: \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.702006 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-config-path\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.702018 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-kernel\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.702036 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acc1aca0-af82-4917-a4bb-9afb519fff17-clustermesh-secrets\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.702891 kubelet[2705]: I0517 00:29:08.702055 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-cilium-config-path\") pod \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\" (UID: \"3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b\") "
May 17 00:29:08.703005 kubelet[2705]: I0517 00:29:08.702071 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-hubble-tls\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.703005 kubelet[2705]: I0517 00:29:08.702083 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-etc-cni-netd\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.703005 kubelet[2705]: I0517 00:29:08.702098 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25jnm\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-kube-api-access-25jnm\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.703005 kubelet[2705]: I0517 00:29:08.702110 2705 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-run\") pod \"acc1aca0-af82-4917-a4bb-9afb519fff17\" (UID: \"acc1aca0-af82-4917-a4bb-9afb519fff17\") "
May 17 00:29:08.708220 kubelet[2705]: I0517 00:29:08.705921 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.720842 kubelet[2705]: I0517 00:29:08.720042 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:29:08.720842 kubelet[2705]: I0517 00:29:08.720122 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728252 kubelet[2705]: I0517 00:29:08.723745 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728252 kubelet[2705]: I0517 00:29:08.723981 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-hostproc" (OuterVolumeSpecName: "hostproc") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728252 kubelet[2705]: I0517 00:29:08.724177 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728252 kubelet[2705]: I0517 00:29:08.724199 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cni-path" (OuterVolumeSpecName: "cni-path") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728252 kubelet[2705]: I0517 00:29:08.724212 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728405 kubelet[2705]: I0517 00:29:08.724223 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728405 kubelet[2705]: I0517 00:29:08.724234 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.728405 kubelet[2705]: I0517 00:29:08.725384 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b" (UID: "3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 17 00:29:08.728519 kubelet[2705]: I0517 00:29:08.728504 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 17 00:29:08.732907 kubelet[2705]: I0517 00:29:08.732835 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-kube-api-access-p5cxq" (OuterVolumeSpecName: "kube-api-access-p5cxq") pod "3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b" (UID: "3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b"). InnerVolumeSpecName "kube-api-access-p5cxq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:29:08.733954 kubelet[2705]: I0517 00:29:08.733911 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-kube-api-access-25jnm" (OuterVolumeSpecName: "kube-api-access-25jnm") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "kube-api-access-25jnm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:29:08.734127 kubelet[2705]: I0517 00:29:08.734086 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/acc1aca0-af82-4917-a4bb-9afb519fff17-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 17 00:29:08.734403 kubelet[2705]: I0517 00:29:08.734378 2705 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "acc1aca0-af82-4917-a4bb-9afb519fff17" (UID: "acc1aca0-af82-4917-a4bb-9afb519fff17"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803078 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25jnm\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-kube-api-access-25jnm\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803124 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-run\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803137 2705 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-bpf-maps\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803147 2705 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-hostproc\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803159 2705 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-lib-modules\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803170 2705 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cni-path\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803158 kubelet[2705]: I0517 00:29:08.803180 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-cgroup\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803191 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-net\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803203 2705 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-xtables-lock\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803216 2705 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p5cxq\" (UniqueName: \"kubernetes.io/projected/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-kube-api-access-p5cxq\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803227 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/acc1aca0-af82-4917-a4bb-9afb519fff17-cilium-config-path\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803238 2705 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-host-proc-sys-kernel\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803249 2705 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/acc1aca0-af82-4917-a4bb-9afb519fff17-clustermesh-secrets\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803260 2705 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b-cilium-config-path\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.803650 kubelet[2705]: I0517 00:29:08.803271 2705 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/acc1aca0-af82-4917-a4bb-9afb519fff17-etc-cni-netd\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:08.804005 kubelet[2705]: I0517 00:29:08.803282 2705 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/acc1aca0-af82-4917-a4bb-9afb519fff17-hubble-tls\") on node \"ci-4081-3-3-n-decaff31fa\" DevicePath \"\""
May 17 00:29:09.289764 systemd[1]: Removed slice kubepods-burstable-podacc1aca0_af82_4917_a4bb_9afb519fff17.slice - libcontainer container kubepods-burstable-podacc1aca0_af82_4917_a4bb_9afb519fff17.slice.
May 17 00:29:09.290032 systemd[1]: kubepods-burstable-podacc1aca0_af82_4917_a4bb_9afb519fff17.slice: Consumed 8.172s CPU time.
May 17 00:29:09.304311 kubelet[2705]: I0517 00:29:09.304239 2705 scope.go:117] "RemoveContainer" containerID="d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2"
May 17 00:29:09.310661 containerd[1490]: time="2025-05-17T00:29:09.309312128Z" level=info msg="RemoveContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\""
May 17 00:29:09.316760 systemd[1]: Removed slice kubepods-besteffort-pod3b6cd7cf_7db6_4dc1_bfba_2e9d1126b65b.slice - libcontainer container kubepods-besteffort-pod3b6cd7cf_7db6_4dc1_bfba_2e9d1126b65b.slice.
May 17 00:29:09.321700 containerd[1490]: time="2025-05-17T00:29:09.320354377Z" level=info msg="RemoveContainer for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" returns successfully"
May 17 00:29:09.329161 kubelet[2705]: I0517 00:29:09.328644 2705 scope.go:117] "RemoveContainer" containerID="2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c"
May 17 00:29:09.332566 containerd[1490]: time="2025-05-17T00:29:09.332477782Z" level=info msg="RemoveContainer for \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\""
May 17 00:29:09.338171 containerd[1490]: time="2025-05-17T00:29:09.338124151Z" level=info msg="RemoveContainer for \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\" returns successfully"
May 17 00:29:09.338726 kubelet[2705]: I0517 00:29:09.338660 2705 scope.go:117] "RemoveContainer" containerID="0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362"
May 17 00:29:09.344133 containerd[1490]: time="2025-05-17T00:29:09.343877631Z" level=info msg="RemoveContainer for \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\""
May 17 00:29:09.351373 containerd[1490]: time="2025-05-17T00:29:09.351295242Z" level=info msg="RemoveContainer for \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\" returns successfully"
May 17 00:29:09.351707 kubelet[2705]: I0517 00:29:09.351658 2705 scope.go:117] "RemoveContainer" containerID="bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0"
May 17 00:29:09.353624 containerd[1490]: time="2025-05-17T00:29:09.353505416Z" level=info msg="RemoveContainer for \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\""
May 17 00:29:09.360111 containerd[1490]: time="2025-05-17T00:29:09.359948800Z" level=info msg="RemoveContainer for \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\" returns successfully"
May 17 00:29:09.360640 kubelet[2705]: I0517 00:29:09.360416 2705 scope.go:117] "RemoveContainer" containerID="69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85"
May 17 00:29:09.362001 containerd[1490]: time="2025-05-17T00:29:09.361897605Z" level=info msg="RemoveContainer for \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\""
May 17 00:29:09.366954 containerd[1490]: time="2025-05-17T00:29:09.366897982Z" level=info msg="RemoveContainer for \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\" returns successfully"
May 17 00:29:09.367150 kubelet[2705]: I0517 00:29:09.367128 2705 scope.go:117] "RemoveContainer" containerID="d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2"
May 17 00:29:09.386797 containerd[1490]: time="2025-05-17T00:29:09.374191220Z" level=error msg="ContainerStatus for \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\": not found"
May 17 00:29:09.395466 kubelet[2705]: E0517 00:29:09.394002 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\": not found" containerID="d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2"
May 17 00:29:09.395466 kubelet[2705]: I0517 00:29:09.394070 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2"} err="failed to get container status \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54d50627b7f1f624238b83752e822d040de856dac35da9ac892bafb5843def2\": not found"
May 17 00:29:09.395466 kubelet[2705]: I0517 00:29:09.394187 2705 scope.go:117] "RemoveContainer" containerID="2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c"
May 17 00:29:09.395948 containerd[1490]: time="2025-05-17T00:29:09.395887758Z" level=error msg="ContainerStatus for \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\": not found"
May 17 00:29:09.396073 kubelet[2705]: E0517 00:29:09.396046 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\": not found" containerID="2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c"
May 17 00:29:09.396115 kubelet[2705]: I0517 00:29:09.396076 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c"} err="failed to get container status \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ffce4f8464834584a1b0415b0fee9a7d92866838edfbc81a5b83e2c3a52319c\": not found"
May 17 00:29:09.396115 kubelet[2705]: I0517 00:29:09.396095 2705 scope.go:117] "RemoveContainer" containerID="0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362"
May 17 00:29:09.396318 containerd[1490]: time="2025-05-17T00:29:09.396274564Z" level=error msg="ContainerStatus for \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\": not found"
May 17 00:29:09.396428 kubelet[2705]: E0517 00:29:09.396399 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\": not found" containerID="0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362"
May 17 00:29:09.396473 kubelet[2705]: I0517 00:29:09.396440 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362"} err="failed to get container status \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a557f4d85bc82decb93340862ecb5096f85d3346200356371be9546f64d4362\": not found"
May 17 00:29:09.396473 kubelet[2705]: I0517 00:29:09.396459 2705 scope.go:117] "RemoveContainer" containerID="bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0"
May 17 00:29:09.396665 containerd[1490]: time="2025-05-17T00:29:09.396627806Z" level=error msg="ContainerStatus for \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\": not found"
May 17 00:29:09.396829 kubelet[2705]: E0517 00:29:09.396800 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\": not found" containerID="bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0"
May 17 00:29:09.396905 kubelet[2705]: I0517 00:29:09.396826 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0"} err="failed to get container status \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\": rpc error: code = NotFound desc = an error occurred
when try to find container \"bc6ede408a466be68fefe6d9b6902e705b111d2200cca202b5462e30c2a9c4f0\": not found" May 17 00:29:09.396905 kubelet[2705]: I0517 00:29:09.396902 2705 scope.go:117] "RemoveContainer" containerID="69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85" May 17 00:29:09.397196 containerd[1490]: time="2025-05-17T00:29:09.397151900Z" level=error msg="ContainerStatus for \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\": not found" May 17 00:29:09.397613 kubelet[2705]: E0517 00:29:09.397336 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\": not found" containerID="69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85" May 17 00:29:09.397613 kubelet[2705]: I0517 00:29:09.397361 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85"} err="failed to get container status \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\": rpc error: code = NotFound desc = an error occurred when try to find container \"69bfac5544fe45929d8666856c6caf6663a249ad559f9f6e8e30afaeb3671f85\": not found" May 17 00:29:09.397613 kubelet[2705]: I0517 00:29:09.397378 2705 scope.go:117] "RemoveContainer" containerID="b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416" May 17 00:29:09.398670 containerd[1490]: time="2025-05-17T00:29:09.398637274Z" level=info msg="RemoveContainer for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\"" May 17 00:29:09.402276 containerd[1490]: time="2025-05-17T00:29:09.402223199Z" level=info msg="RemoveContainer for 
\"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" returns successfully" May 17 00:29:09.402377 kubelet[2705]: I0517 00:29:09.402357 2705 scope.go:117] "RemoveContainer" containerID="b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416" May 17 00:29:09.402565 containerd[1490]: time="2025-05-17T00:29:09.402537889Z" level=error msg="ContainerStatus for \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\": not found" May 17 00:29:09.402700 kubelet[2705]: E0517 00:29:09.402664 2705 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\": not found" containerID="b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416" May 17 00:29:09.402700 kubelet[2705]: I0517 00:29:09.402689 2705 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416"} err="failed to get container status \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4041ab1d56aaf10be178678609de2951a1c62bb5a4a115d1a48767c9948d416\": not found" May 17 00:29:09.426624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a3deeb99139204bb68242344eb471791b3ba270a56642d7fcf3f12cd2698466-rootfs.mount: Deactivated successfully. May 17 00:29:09.426756 systemd[1]: var-lib-kubelet-pods-3b6cd7cf\x2d7db6\x2d4dc1\x2dbfba\x2d2e9d1126b65b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp5cxq.mount: Deactivated successfully. 
May 17 00:29:09.426844 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf-rootfs.mount: Deactivated successfully.
May 17 00:29:09.426919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-650e740ea98a98efa753f793b90816cd6e8f54b91b60b78fc0ea692805171daf-shm.mount: Deactivated successfully.
May 17 00:29:09.427008 systemd[1]: var-lib-kubelet-pods-acc1aca0\x2daf82\x2d4917\x2da4bb\x2d9afb519fff17-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d25jnm.mount: Deactivated successfully.
May 17 00:29:09.427087 systemd[1]: var-lib-kubelet-pods-acc1aca0\x2daf82\x2d4917\x2da4bb\x2d9afb519fff17-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 17 00:29:09.427158 systemd[1]: var-lib-kubelet-pods-acc1aca0\x2daf82\x2d4917\x2da4bb\x2d9afb519fff17-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 17 00:29:10.312067 kubelet[2705]: I0517 00:29:10.312005 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b" path="/var/lib/kubelet/pods/3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b/volumes"
May 17 00:29:10.313080 kubelet[2705]: I0517 00:29:10.312990 2705 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acc1aca0-af82-4917-a4bb-9afb519fff17" path="/var/lib/kubelet/pods/acc1aca0-af82-4917-a4bb-9afb519fff17/volumes"
May 17 00:29:10.481849 sshd[4276]: pam_unix(sshd:session): session closed for user core
May 17 00:29:10.486034 systemd[1]: sshd@19-37.27.213.195:22-139.178.89.65:39308.service: Deactivated successfully.
May 17 00:29:10.488926 systemd[1]: session-20.scope: Deactivated successfully.
May 17 00:29:10.491035 systemd-logind[1474]: Session 20 logged out. Waiting for processes to exit.
May 17 00:29:10.492920 systemd-logind[1474]: Removed session 20.
May 17 00:29:10.654997 systemd[1]: Started sshd@20-37.27.213.195:22-139.178.89.65:36966.service - OpenSSH per-connection server daemon (139.178.89.65:36966).
May 17 00:29:11.650532 sshd[4438]: Accepted publickey for core from 139.178.89.65 port 36966 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:11.653624 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:11.662341 systemd-logind[1474]: New session 21 of user core.
May 17 00:29:11.671926 systemd[1]: Started session-21.scope - Session 21 of User core.
May 17 00:29:12.887547 kubelet[2705]: I0517 00:29:12.886655 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="acc1aca0-af82-4917-a4bb-9afb519fff17" containerName="cilium-agent"
May 17 00:29:12.887547 kubelet[2705]: I0517 00:29:12.886709 2705 memory_manager.go:355] "RemoveStaleState removing state" podUID="3b6cd7cf-7db6-4dc1-bfba-2e9d1126b65b" containerName="cilium-operator"
May 17 00:29:12.901900 systemd[1]: Created slice kubepods-burstable-pod2fc2af3b_bd97_4218_9cef_ec27d5dae052.slice - libcontainer container kubepods-burstable-pod2fc2af3b_bd97_4218_9cef_ec27d5dae052.slice.
May 17 00:29:13.027738 kubelet[2705]: I0517 00:29:13.027617 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-cni-path\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.027738 kubelet[2705]: I0517 00:29:13.027703 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-host-proc-sys-kernel\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.027738 kubelet[2705]: I0517 00:29:13.027761 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w25l\" (UniqueName: \"kubernetes.io/projected/2fc2af3b-bd97-4218-9cef-ec27d5dae052-kube-api-access-2w25l\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027811 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-hostproc\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027838 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-bpf-maps\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027864 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fc2af3b-bd97-4218-9cef-ec27d5dae052-hubble-tls\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027893 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-host-proc-sys-net\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027925 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-cilium-cgroup\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028395 kubelet[2705]: I0517 00:29:13.027955 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-lib-modules\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.027981 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fc2af3b-bd97-4218-9cef-ec27d5dae052-cilium-ipsec-secrets\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.028011 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-etc-cni-netd\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.028039 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fc2af3b-bd97-4218-9cef-ec27d5dae052-cilium-config-path\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.028065 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-cilium-run\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.028091 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fc2af3b-bd97-4218-9cef-ec27d5dae052-xtables-lock\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.028980 kubelet[2705]: I0517 00:29:13.028121 2705 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fc2af3b-bd97-4218-9cef-ec27d5dae052-clustermesh-secrets\") pod \"cilium-x4x2q\" (UID: \"2fc2af3b-bd97-4218-9cef-ec27d5dae052\") " pod="kube-system/cilium-x4x2q"
May 17 00:29:13.087924 sshd[4438]: pam_unix(sshd:session): session closed for user core
May 17 00:29:13.096122 systemd-logind[1474]: Session 21 logged out. Waiting for processes to exit.
May 17 00:29:13.097688 systemd[1]: sshd@20-37.27.213.195:22-139.178.89.65:36966.service: Deactivated successfully.
May 17 00:29:13.102774 systemd[1]: session-21.scope: Deactivated successfully.
May 17 00:29:13.106701 systemd-logind[1474]: Removed session 21.
May 17 00:29:13.229962 containerd[1490]: time="2025-05-17T00:29:13.229818091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4x2q,Uid:2fc2af3b-bd97-4218-9cef-ec27d5dae052,Namespace:kube-system,Attempt:0,}"
May 17 00:29:13.261005 systemd[1]: Started sshd@21-37.27.213.195:22-139.178.89.65:36980.service - OpenSSH per-connection server daemon (139.178.89.65:36980).
May 17 00:29:13.267124 containerd[1490]: time="2025-05-17T00:29:13.266922025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:29:13.267296 containerd[1490]: time="2025-05-17T00:29:13.267093927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:29:13.267296 containerd[1490]: time="2025-05-17T00:29:13.267124013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:13.267296 containerd[1490]: time="2025-05-17T00:29:13.267236935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:29:13.287231 systemd[1]: Started cri-containerd-39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537.scope - libcontainer container 39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537.
May 17 00:29:13.323379 containerd[1490]: time="2025-05-17T00:29:13.323307156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4x2q,Uid:2fc2af3b-bd97-4218-9cef-ec27d5dae052,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\""
May 17 00:29:13.327563 containerd[1490]: time="2025-05-17T00:29:13.327532190Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 17 00:29:13.339313 containerd[1490]: time="2025-05-17T00:29:13.339256247Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16\""
May 17 00:29:13.340000 containerd[1490]: time="2025-05-17T00:29:13.339814634Z" level=info msg="StartContainer for \"668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16\""
May 17 00:29:13.363692 systemd[1]: Started cri-containerd-668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16.scope - libcontainer container 668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16.
May 17 00:29:13.388574 containerd[1490]: time="2025-05-17T00:29:13.388529883Z" level=info msg="StartContainer for \"668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16\" returns successfully"
May 17 00:29:13.397138 systemd[1]: cri-containerd-668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16.scope: Deactivated successfully.
May 17 00:29:13.442524 containerd[1490]: time="2025-05-17T00:29:13.442451194Z" level=info msg="shim disconnected" id=668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16 namespace=k8s.io
May 17 00:29:13.442524 containerd[1490]: time="2025-05-17T00:29:13.442514433Z" level=warning msg="cleaning up after shim disconnected" id=668daeaa72ee02d5611ad838e99b61549d67066a240d42ed21667e19a91cef16 namespace=k8s.io
May 17 00:29:13.442524 containerd[1490]: time="2025-05-17T00:29:13.442524872Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:13.491904 kubelet[2705]: E0517 00:29:13.491694 2705 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 17 00:29:14.244056 sshd[4467]: Accepted publickey for core from 139.178.89.65 port 36980 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:14.246000 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:14.251660 systemd-logind[1474]: New session 22 of user core.
May 17 00:29:14.261744 systemd[1]: Started session-22.scope - Session 22 of User core.
May 17 00:29:14.331257 containerd[1490]: time="2025-05-17T00:29:14.330744449Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 17 00:29:14.365068 containerd[1490]: time="2025-05-17T00:29:14.364995992Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9\""
May 17 00:29:14.372964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706983189.mount: Deactivated successfully.
May 17 00:29:14.405617 containerd[1490]: time="2025-05-17T00:29:14.403025572Z" level=info msg="StartContainer for \"fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9\""
May 17 00:29:14.464429 systemd[1]: Started cri-containerd-fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9.scope - libcontainer container fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9.
May 17 00:29:14.491572 containerd[1490]: time="2025-05-17T00:29:14.491528065Z" level=info msg="StartContainer for \"fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9\" returns successfully"
May 17 00:29:14.498310 systemd[1]: cri-containerd-fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9.scope: Deactivated successfully.
May 17 00:29:14.524386 containerd[1490]: time="2025-05-17T00:29:14.524276591Z" level=info msg="shim disconnected" id=fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9 namespace=k8s.io
May 17 00:29:14.524386 containerd[1490]: time="2025-05-17T00:29:14.524378862Z" level=warning msg="cleaning up after shim disconnected" id=fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9 namespace=k8s.io
May 17 00:29:14.524386 containerd[1490]: time="2025-05-17T00:29:14.524391226Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:14.914622 sshd[4467]: pam_unix(sshd:session): session closed for user core
May 17 00:29:14.919336 systemd[1]: sshd@21-37.27.213.195:22-139.178.89.65:36980.service: Deactivated successfully.
May 17 00:29:14.921541 systemd[1]: session-22.scope: Deactivated successfully.
May 17 00:29:14.922875 systemd-logind[1474]: Session 22 logged out. Waiting for processes to exit.
May 17 00:29:14.924460 systemd-logind[1474]: Removed session 22.
May 17 00:29:15.086466 systemd[1]: Started sshd@22-37.27.213.195:22-139.178.89.65:36990.service - OpenSSH per-connection server daemon (139.178.89.65:36990).
May 17 00:29:15.137010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdc8917df411dba2b1533dc9731581ed20121e0377adf401654a72fd221ef0d9-rootfs.mount: Deactivated successfully.
May 17 00:29:15.337670 containerd[1490]: time="2025-05-17T00:29:15.337398732Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 17 00:29:15.363126 containerd[1490]: time="2025-05-17T00:29:15.362778408Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127\""
May 17 00:29:15.366932 containerd[1490]: time="2025-05-17T00:29:15.366367920Z" level=info msg="StartContainer for \"d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127\""
May 17 00:29:15.424921 systemd[1]: Started cri-containerd-d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127.scope - libcontainer container d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127.
May 17 00:29:15.465205 containerd[1490]: time="2025-05-17T00:29:15.465137258Z" level=info msg="StartContainer for \"d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127\" returns successfully"
May 17 00:29:15.472253 systemd[1]: cri-containerd-d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127.scope: Deactivated successfully.
May 17 00:29:15.506554 containerd[1490]: time="2025-05-17T00:29:15.506436979Z" level=info msg="shim disconnected" id=d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127 namespace=k8s.io
May 17 00:29:15.506554 containerd[1490]: time="2025-05-17T00:29:15.506523552Z" level=warning msg="cleaning up after shim disconnected" id=d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127 namespace=k8s.io
May 17 00:29:15.506554 containerd[1490]: time="2025-05-17T00:29:15.506538400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:16.018166 kubelet[2705]: I0517 00:29:16.018088 2705 setters.go:602] "Node became not ready" node="ci-4081-3-3-n-decaff31fa" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-17T00:29:16Z","lastTransitionTime":"2025-05-17T00:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 17 00:29:16.055134 sshd[4628]: Accepted publickey for core from 139.178.89.65 port 36990 ssh2: RSA SHA256:kFcxshSye1IppED0G84lz4/lbUrZJ1wq7wf6p1uuNAE
May 17 00:29:16.056979 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 17 00:29:16.062310 systemd-logind[1474]: New session 23 of user core.
May 17 00:29:16.070798 systemd[1]: Started session-23.scope - Session 23 of User core.
May 17 00:29:16.137049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1e3ff7b9d18004f2a7c0c2176444c904947c2eb629d557ff33afb278974b127-rootfs.mount: Deactivated successfully.
May 17 00:29:16.343074 containerd[1490]: time="2025-05-17T00:29:16.342920875Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 17 00:29:16.364764 containerd[1490]: time="2025-05-17T00:29:16.363675057Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d\""
May 17 00:29:16.367065 containerd[1490]: time="2025-05-17T00:29:16.367033485Z" level=info msg="StartContainer for \"3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d\""
May 17 00:29:16.368081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969537512.mount: Deactivated successfully.
May 17 00:29:16.412801 systemd[1]: Started cri-containerd-3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d.scope - libcontainer container 3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d.
May 17 00:29:16.441275 systemd[1]: cri-containerd-3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d.scope: Deactivated successfully.
May 17 00:29:16.443036 containerd[1490]: time="2025-05-17T00:29:16.442819882Z" level=info msg="StartContainer for \"3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d\" returns successfully"
May 17 00:29:16.473923 containerd[1490]: time="2025-05-17T00:29:16.473854382Z" level=info msg="shim disconnected" id=3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d namespace=k8s.io
May 17 00:29:16.473923 containerd[1490]: time="2025-05-17T00:29:16.473920827Z" level=warning msg="cleaning up after shim disconnected" id=3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d namespace=k8s.io
May 17 00:29:16.473923 containerd[1490]: time="2025-05-17T00:29:16.473932328Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 17 00:29:17.136315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6fc2f7962ef5f8df6b57b349faed45bab026acf4490fb2f867d11538d98e2d-rootfs.mount: Deactivated successfully.
May 17 00:29:17.349791 containerd[1490]: time="2025-05-17T00:29:17.349731654Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 17 00:29:17.379194 containerd[1490]: time="2025-05-17T00:29:17.378993270Z" level=info msg="CreateContainer within sandbox \"39d25399de97fbf3ab9544b051ae2e5d1ad8ef5bbddcb34b98cd884826bb6537\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c\""
May 17 00:29:17.380885 containerd[1490]: time="2025-05-17T00:29:17.380843339Z" level=info msg="StartContainer for \"f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c\""
May 17 00:29:17.421799 systemd[1]: Started cri-containerd-f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c.scope - libcontainer container f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c.
May 17 00:29:17.455946 containerd[1490]: time="2025-05-17T00:29:17.455866235Z" level=info msg="StartContainer for \"f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c\" returns successfully"
May 17 00:29:17.933789 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 17 00:29:18.383901 kubelet[2705]: I0517 00:29:18.383683 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x4x2q" podStartSLOduration=6.383639821 podStartE2EDuration="6.383639821s" podCreationTimestamp="2025-05-17 00:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:29:18.382360863 +0000 UTC m=+340.200951907" watchObservedRunningTime="2025-05-17 00:29:18.383639821 +0000 UTC m=+340.202230876"
May 17 00:29:21.274220 systemd-networkd[1395]: lxc_health: Link UP
May 17 00:29:21.302762 systemd-networkd[1395]: lxc_health: Gained carrier
May 17 00:29:23.191824 systemd-networkd[1395]: lxc_health: Gained IPv6LL
May 17 00:29:23.530972 systemd[1]: run-containerd-runc-k8s.io-f8a5e12ed8a33d0fbee019603a89c958080dff56a46369fa3f3657fa8b70272c-runc.ulqSvM.mount: Deactivated successfully.
May 17 00:29:28.080423 sshd[4628]: pam_unix(sshd:session): session closed for user core
May 17 00:29:28.084885 systemd[1]: sshd@22-37.27.213.195:22-139.178.89.65:36990.service: Deactivated successfully.
May 17 00:29:28.087676 systemd[1]: session-23.scope: Deactivated successfully.
May 17 00:29:28.089846 systemd-logind[1474]: Session 23 logged out. Waiting for processes to exit.
May 17 00:29:28.092139 systemd-logind[1474]: Removed session 23.