Jun 20 18:52:43.818392 kernel: Linux version 6.6.94-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:12:40 -00 2025 Jun 20 18:52:43.818411 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:52:43.818420 kernel: BIOS-provided physical RAM map: Jun 20 18:52:43.818426 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 20 18:52:43.818430 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 20 18:52:43.818435 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 20 18:52:43.818482 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Jun 20 18:52:43.818487 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Jun 20 18:52:43.818495 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jun 20 18:52:43.818500 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jun 20 18:52:43.818505 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jun 20 18:52:43.818510 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 20 18:52:43.818515 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jun 20 18:52:43.818520 kernel: NX (Execute Disable) protection: active Jun 20 18:52:43.818527 kernel: APIC: Static calls initialized Jun 20 18:52:43.818533 kernel: SMBIOS 3.0.0 present. 
Jun 20 18:52:43.818539 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Jun 20 18:52:43.818544 kernel: Hypervisor detected: KVM Jun 20 18:52:43.818550 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 20 18:52:43.818555 kernel: kvm-clock: using sched offset of 3024101996 cycles Jun 20 18:52:43.818561 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 20 18:52:43.818567 kernel: tsc: Detected 2445.404 MHz processor Jun 20 18:52:43.818572 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 20 18:52:43.818578 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 20 18:52:43.818585 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Jun 20 18:52:43.818591 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 20 18:52:43.818597 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 20 18:52:43.818602 kernel: Using GB pages for direct mapping Jun 20 18:52:43.818608 kernel: ACPI: Early table checksum verification disabled Jun 20 18:52:43.818613 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Jun 20 18:52:43.818619 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818625 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818631 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818637 kernel: ACPI: FACS 0x000000007CFE0000 000040 Jun 20 18:52:43.818643 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818648 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818654 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818660 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jun 20 18:52:43.818665 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Jun 20 18:52:43.818671 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Jun 20 18:52:43.818680 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Jun 20 18:52:43.818686 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Jun 20 18:52:43.818691 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Jun 20 18:52:43.818697 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Jun 20 18:52:43.818703 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Jun 20 18:52:43.818709 kernel: No NUMA configuration found Jun 20 18:52:43.818714 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Jun 20 18:52:43.818721 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff] Jun 20 18:52:43.818727 kernel: Zone ranges: Jun 20 18:52:43.818733 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 20 18:52:43.818739 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Jun 20 18:52:43.818745 kernel: Normal empty Jun 20 18:52:43.818750 kernel: Movable zone start for each node Jun 20 18:52:43.818756 kernel: Early memory node ranges Jun 20 18:52:43.818762 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 20 18:52:43.818768 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Jun 20 18:52:43.818775 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000007cfdbfff] Jun 20 18:52:43.818780 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 20 18:52:43.818786 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 20 18:52:43.818792 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jun 20 18:52:43.818798 kernel: ACPI: PM-Timer IO Port: 0x608 Jun 20 18:52:43.818803 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 20 18:52:43.818809 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jun 20 18:52:43.818815 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jun 20 18:52:43.818821 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 20 18:52:43.818827 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 20 18:52:43.818833 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 20 18:52:43.818839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 20 18:52:43.818845 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 20 18:52:43.818851 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 20 18:52:43.818857 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 20 18:52:43.818862 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 20 18:52:43.818868 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jun 20 18:52:43.818874 kernel: Booting paravirtualized kernel on KVM Jun 20 18:52:43.818880 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 20 18:52:43.818887 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 20 18:52:43.818893 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u1048576 Jun 20 18:52:43.818898 kernel: pcpu-alloc: s197096 r8192 d32280 u1048576 alloc=1*2097152 Jun 20 18:52:43.818904 kernel: pcpu-alloc: [0] 0 1 Jun 20 18:52:43.818910 kernel: kvm-guest: PV spinlocks disabled, no host support Jun 20 18:52:43.818916 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:52:43.818923 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 20 18:52:43.818937 kernel: random: crng init done Jun 20 18:52:43.818944 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 20 18:52:43.818950 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 20 18:52:43.818956 kernel: Fallback order for Node 0: 0 Jun 20 18:52:43.818961 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 503708 Jun 20 18:52:43.818967 kernel: Policy zone: DMA32 Jun 20 18:52:43.818973 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 20 18:52:43.818979 kernel: Memory: 1920004K/2047464K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43488K init, 1588K bss, 127200K reserved, 0K cma-reserved) Jun 20 18:52:43.818985 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 20 18:52:43.818991 kernel: ftrace: allocating 37938 entries in 149 pages Jun 20 18:52:43.818998 kernel: ftrace: allocated 149 pages with 4 groups Jun 20 18:52:43.819004 kernel: Dynamic Preempt: voluntary Jun 20 18:52:43.819010 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 20 18:52:43.819016 kernel: rcu: RCU event tracing is enabled. Jun 20 18:52:43.819022 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 20 18:52:43.819028 kernel: Trampoline variant of Tasks RCU enabled. Jun 20 18:52:43.819034 kernel: Rude variant of Tasks RCU enabled. Jun 20 18:52:43.819040 kernel: Tracing variant of Tasks RCU enabled. Jun 20 18:52:43.819046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 20 18:52:43.819053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 20 18:52:43.819059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 20 18:52:43.819065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 20 18:52:43.819070 kernel: Console: colour VGA+ 80x25 Jun 20 18:52:43.819076 kernel: printk: console [tty0] enabled Jun 20 18:52:43.819082 kernel: printk: console [ttyS0] enabled Jun 20 18:52:43.819088 kernel: ACPI: Core revision 20230628 Jun 20 18:52:43.819093 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jun 20 18:52:43.819099 kernel: APIC: Switch to symmetric I/O mode setup Jun 20 18:52:43.819106 kernel: x2apic enabled Jun 20 18:52:43.819112 kernel: APIC: Switched APIC routing to: physical x2apic Jun 20 18:52:43.819118 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jun 20 18:52:43.819124 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jun 20 18:52:43.819129 kernel: Calibrating delay loop (skipped) preset value.. 
4890.80 BogoMIPS (lpj=2445404) Jun 20 18:52:43.819135 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jun 20 18:52:43.819141 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jun 20 18:52:43.819147 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jun 20 18:52:43.819158 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 20 18:52:43.819164 kernel: Spectre V2 : Mitigation: Retpolines Jun 20 18:52:43.819170 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jun 20 18:52:43.819176 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jun 20 18:52:43.819183 kernel: RETBleed: Mitigation: untrained return thunk Jun 20 18:52:43.819189 kernel: Spectre V2 : User space: Vulnerable Jun 20 18:52:43.819195 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jun 20 18:52:43.819202 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 20 18:52:43.819208 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 20 18:52:43.819215 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 20 18:52:43.819221 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 20 18:52:43.819227 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jun 20 18:52:43.819234 kernel: Freeing SMP alternatives memory: 32K Jun 20 18:52:43.819240 kernel: pid_max: default: 32768 minimum: 301 Jun 20 18:52:43.819246 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jun 20 18:52:43.819252 kernel: landlock: Up and running. Jun 20 18:52:43.819258 kernel: SELinux: Initializing. Jun 20 18:52:43.819264 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:52:43.819271 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 20 18:52:43.819278 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Jun 20 18:52:43.819284 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:52:43.819290 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:52:43.819296 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jun 20 18:52:43.819302 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jun 20 18:52:43.819308 kernel: ... version: 0 Jun 20 18:52:43.819314 kernel: ... bit width: 48 Jun 20 18:52:43.819322 kernel: ... generic registers: 6 Jun 20 18:52:43.819328 kernel: ... value mask: 0000ffffffffffff Jun 20 18:52:43.819334 kernel: ... max period: 00007fffffffffff Jun 20 18:52:43.819340 kernel: ... fixed-purpose events: 0 Jun 20 18:52:43.819357 kernel: ... event mask: 000000000000003f Jun 20 18:52:43.819363 kernel: signal: max sigframe size: 1776 Jun 20 18:52:43.819369 kernel: rcu: Hierarchical SRCU implementation. Jun 20 18:52:43.819376 kernel: rcu: Max phase no-delay instances is 400. Jun 20 18:52:43.819382 kernel: smp: Bringing up secondary CPUs ... Jun 20 18:52:43.819388 kernel: smpboot: x86: Booting SMP configuration: Jun 20 18:52:43.819395 kernel: .... 
node #0, CPUs: #1 Jun 20 18:52:43.819401 kernel: smp: Brought up 1 node, 2 CPUs Jun 20 18:52:43.819407 kernel: smpboot: Max logical packages: 1 Jun 20 18:52:43.819413 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS) Jun 20 18:52:43.819419 kernel: devtmpfs: initialized Jun 20 18:52:43.819425 kernel: x86/mm: Memory block size: 128MB Jun 20 18:52:43.819432 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 20 18:52:43.819450 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 20 18:52:43.819456 kernel: pinctrl core: initialized pinctrl subsystem Jun 20 18:52:43.819941 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 20 18:52:43.819950 kernel: audit: initializing netlink subsys (disabled) Jun 20 18:52:43.819956 kernel: audit: type=2000 audit(1750445563.594:1): state=initialized audit_enabled=0 res=1 Jun 20 18:52:43.819963 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 20 18:52:43.819969 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 20 18:52:43.819975 kernel: cpuidle: using governor menu Jun 20 18:52:43.819981 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 20 18:52:43.819987 kernel: dca service started, version 1.12.1 Jun 20 18:52:43.819994 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jun 20 18:52:43.820003 kernel: PCI: Using configuration type 1 for base access Jun 20 18:52:43.820009 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jun 20 18:52:43.820015 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 20 18:52:43.820021 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 20 18:52:43.820027 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 20 18:52:43.820034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 20 18:52:43.820040 kernel: ACPI: Added _OSI(Module Device) Jun 20 18:52:43.820046 kernel: ACPI: Added _OSI(Processor Device) Jun 20 18:52:43.820052 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 20 18:52:43.820059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 20 18:52:43.820065 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 20 18:52:43.820071 kernel: ACPI: Interpreter enabled Jun 20 18:52:43.820077 kernel: ACPI: PM: (supports S0 S5) Jun 20 18:52:43.820084 kernel: ACPI: Using IOAPIC for interrupt routing Jun 20 18:52:43.820090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 20 18:52:43.820096 kernel: PCI: Using E820 reservations for host bridge windows Jun 20 18:52:43.820102 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jun 20 18:52:43.820108 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 20 18:52:43.820221 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 20 18:52:43.820292 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jun 20 18:52:43.820368 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jun 20 18:52:43.820378 kernel: PCI host bridge to bus 0000:00 Jun 20 18:52:43.820476 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 20 18:52:43.820542 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 20 18:52:43.820605 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Jun 20 18:52:43.820662 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Jun 20 18:52:43.820720 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jun 20 18:52:43.820870 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jun 20 18:52:43.820980 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 20 18:52:43.821060 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jun 20 18:52:43.821134 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 Jun 20 18:52:43.821205 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref] Jun 20 18:52:43.821269 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref] Jun 20 18:52:43.821333 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff] Jun 20 18:52:43.821411 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref] Jun 20 18:52:43.821496 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 20 18:52:43.821602 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.821677 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff] Jun 20 18:52:43.821748 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.821813 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff] Jun 20 18:52:43.821883 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.821948 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff] Jun 20 18:52:43.822022 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822092 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff] Jun 20 18:52:43.822163 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822228 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff] Jun 20 18:52:43.822297 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822375 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff] Jun 20 18:52:43.822485 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822564 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff] Jun 20 18:52:43.822636 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822702 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff] Jun 20 18:52:43.822772 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Jun 20 18:52:43.822836 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff] Jun 20 18:52:43.822905 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jun 20 18:52:43.822971 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jun 20 18:52:43.823044 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jun 20 18:52:43.823108 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f] Jun 20 18:52:43.823171 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff] Jun 20 18:52:43.823242 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jun 20 18:52:43.823307 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jun 20 18:52:43.823396 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 18:52:43.823493 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff] Jun 20 18:52:43.823563 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jun 20 18:52:43.823628 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff 
pref] Jun 20 18:52:43.823691 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jun 20 18:52:43.823755 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jun 20 18:52:43.823819 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 18:52:43.823891 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Jun 20 18:52:43.823962 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit] Jun 20 18:52:43.824027 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jun 20 18:52:43.824090 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jun 20 18:52:43.824153 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 18:52:43.824226 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Jun 20 18:52:43.824293 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff] Jun 20 18:52:43.824373 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref] Jun 20 18:52:43.824468 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jun 20 18:52:43.824541 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jun 20 18:52:43.824606 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 18:52:43.824680 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Jun 20 18:52:43.824747 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jun 20 18:52:43.824812 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jun 20 18:52:43.824876 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jun 20 18:52:43.824944 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 18:52:43.825017 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Jun 20 18:52:43.825084 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff] Jun 20 18:52:43.825150 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref] Jun 20 18:52:43.825214 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jun 20 18:52:43.825276 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jun 20 18:52:43.825339 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 18:52:43.827863 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Jun 20 18:52:43.827953 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff] Jun 20 18:52:43.828024 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref] Jun 20 18:52:43.828090 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jun 20 18:52:43.828154 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jun 20 18:52:43.828216 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 18:52:43.828225 kernel: acpiphp: Slot [0] registered Jun 20 18:52:43.828295 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Jun 20 18:52:43.828382 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff] Jun 20 18:52:43.829096 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref] Jun 20 18:52:43.829178 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref] Jun 20 18:52:43.829244 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jun 20 18:52:43.829309 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jun 20 18:52:43.829389 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 18:52:43.829399 kernel: acpiphp: Slot [0-2] registered Jun 20 18:52:43.829818 kernel: pci 
0000:00:02.7: PCI bridge to [bus 08] Jun 20 18:52:43.829899 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jun 20 18:52:43.829965 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 18:52:43.829974 kernel: acpiphp: Slot [0-3] registered Jun 20 18:52:43.830036 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jun 20 18:52:43.830100 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 20 18:52:43.830163 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 18:52:43.830172 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 20 18:52:43.830179 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 20 18:52:43.830189 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 20 18:52:43.830195 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 20 18:52:43.830201 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jun 20 18:52:43.830208 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jun 20 18:52:43.830214 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jun 20 18:52:43.830220 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jun 20 18:52:43.830226 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jun 20 18:52:43.830232 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jun 20 18:52:43.830238 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jun 20 18:52:43.830245 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jun 20 18:52:43.830251 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jun 20 18:52:43.830257 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jun 20 18:52:43.830264 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jun 20 18:52:43.830270 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jun 20 18:52:43.830276 kernel: iommu: Default domain type: Translated Jun 20 18:52:43.830282 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 20 18:52:43.830288 kernel: PCI: Using ACPI for IRQ routing Jun 20 18:52:43.830294 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 20 18:52:43.830301 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 20 18:52:43.830307 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Jun 20 18:52:43.830389 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jun 20 18:52:43.832148 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jun 20 18:52:43.832225 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 20 18:52:43.832234 kernel: vgaarb: loaded Jun 20 18:52:43.832241 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jun 20 18:52:43.832247 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jun 20 18:52:43.832254 kernel: clocksource: Switched to clocksource kvm-clock Jun 20 18:52:43.832263 kernel: VFS: Disk quotas dquot_6.6.0 Jun 20 18:52:43.832270 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 20 18:52:43.832276 kernel: pnp: PnP ACPI init Jun 20 18:52:43.832360 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jun 20 18:52:43.832372 kernel: pnp: PnP ACPI: found 5 devices Jun 20 18:52:43.832379 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 20 18:52:43.832385 kernel: NET: Registered PF_INET protocol family Jun 20 18:52:43.832391 kernel: IP idents hash table entries: 32768 (order: 6, 262144 
bytes, linear) Jun 20 18:52:43.832400 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 20 18:52:43.832407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 20 18:52:43.832413 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 20 18:52:43.832419 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 20 18:52:43.832425 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 20 18:52:43.832432 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:52:43.832481 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 20 18:52:43.832489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 20 18:52:43.832495 kernel: NET: Registered PF_XDP protocol family Jun 20 18:52:43.832574 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jun 20 18:52:43.832641 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jun 20 18:52:43.832707 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jun 20 18:52:43.832771 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff] Jun 20 18:52:43.832833 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff] Jun 20 18:52:43.832895 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff] Jun 20 18:52:43.832958 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jun 20 18:52:43.833024 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Jun 20 18:52:43.833089 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 18:52:43.833152 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jun 20 18:52:43.833214 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Jun 20 18:52:43.833277 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 18:52:43.833339 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jun 20 18:52:43.833419 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Jun 20 18:52:43.833953 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 18:52:43.834030 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jun 20 18:52:43.834095 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Jun 20 18:52:43.834159 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 18:52:43.834222 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jun 20 18:52:43.834285 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Jun 20 18:52:43.834358 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 18:52:43.834431 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jun 20 18:52:43.835736 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Jun 20 18:52:43.835809 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 18:52:43.835874 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jun 20 18:52:43.835937 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Jun 20 18:52:43.836001 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Jun 20 18:52:43.836064 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 18:52:43.836127 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jun 20 18:52:43.836190 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Jun 20 
18:52:43.836254 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Jun 20 18:52:43.836318 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 18:52:43.836401 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jun 20 18:52:43.836867 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Jun 20 18:52:43.836938 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Jun 20 18:52:43.837009 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 18:52:43.837069 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 20 18:52:43.837126 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 20 18:52:43.837185 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 20 18:52:43.837242 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Jun 20 18:52:43.837297 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jun 20 18:52:43.837364 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jun 20 18:52:43.838060 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Jun 20 18:52:43.838143 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Jun 20 18:52:43.838213 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Jun 20 18:52:43.838276 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jun 20 18:52:43.838342 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Jun 20 18:52:43.838420 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jun 20 18:52:43.838519 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Jun 20 18:52:43.838582 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jun 20 18:52:43.838645 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Jun 20 18:52:43.838703 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jun 20 18:52:43.838765 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Jun 20 18:52:43.838824 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jun 20 18:52:43.838893 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Jun 20 18:52:43.838952 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Jun 20 18:52:43.839010 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jun 20 18:52:43.839074 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Jun 20 18:52:43.839133 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Jun 20 18:52:43.839190 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jun 20 18:52:43.839255 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Jun 20 18:52:43.839318 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] Jun 20 18:52:43.839390 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jun 20 18:52:43.839401 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jun 20 18:52:43.839408 kernel: PCI: CLS 0 bytes, default 64 Jun 20 18:52:43.839414 kernel: Initialise system trusted keyrings Jun 20 18:52:43.839421 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 20 18:52:43.839427 kernel: Key type asymmetric registered Jun 20 18:52:43.839434 kernel: Asymmetric key parser 'x509' registered Jun 20 18:52:43.841899 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 20 18:52:43.841911 kernel: io scheduler mq-deadline 
registered Jun 20 18:52:43.841918 kernel: io scheduler kyber registered Jun 20 18:52:43.841925 kernel: io scheduler bfq registered Jun 20 18:52:43.842011 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jun 20 18:52:43.842082 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jun 20 18:52:43.842149 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jun 20 18:52:43.842213 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jun 20 18:52:43.842276 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jun 20 18:52:43.842344 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jun 20 18:52:43.842427 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jun 20 18:52:43.842581 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jun 20 18:52:43.842652 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jun 20 18:52:43.842748 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jun 20 18:52:43.842847 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jun 20 18:52:43.842913 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jun 20 18:52:43.842977 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jun 20 18:52:43.843046 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jun 20 18:52:43.843111 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jun 20 18:52:43.843174 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jun 20 18:52:43.843184 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jun 20 18:52:43.843247 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Jun 20 18:52:43.843311 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Jun 20 18:52:43.843321 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 20 18:52:43.843328 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Jun 20 18:52:43.843335 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 20 18:52:43.843344 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 20 18:52:43.843363 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 20 18:52:43.843370 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 20 18:52:43.843376 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 20 18:52:43.843383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 20 18:52:43.843472 kernel: rtc_cmos 00:03: RTC can wake from S4 Jun 20 18:52:43.843539 kernel: rtc_cmos 00:03: registered as rtc0 Jun 20 18:52:43.843600 kernel: rtc_cmos 00:03: setting system clock to 2025-06-20T18:52:43 UTC (1750445563) Jun 20 18:52:43.843667 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jun 20 18:52:43.843677 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jun 20 18:52:43.843684 kernel: NET: Registered PF_INET6 protocol family Jun 20 18:52:43.843690 kernel: Segment Routing with IPv6 Jun 20 18:52:43.843697 kernel: In-situ OAM (IOAM) with IPv6 Jun 20 18:52:43.843704 kernel: NET: Registered PF_PACKET protocol family Jun 20 18:52:43.843710 kernel: Key type dns_resolver registered Jun 20 18:52:43.843717 kernel: IPI shorthand broadcast: enabled Jun 20 18:52:43.843726 kernel: sched_clock: Marking stable (1021005860, 136406103)->(1164223163, -6811200) Jun 20 18:52:43.843732 kernel: registered taskstats version 1 Jun 20 18:52:43.843739 kernel: Loading compiled-in X.509 certificates Jun 20 18:52:43.843745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.94-flatcar: 
583832681762bbd3c2cbcca308896cbba88c4497' Jun 20 18:52:43.843752 kernel: Key type .fscrypt registered Jun 20 18:52:43.843758 kernel: Key type fscrypt-provisioning registered Jun 20 18:52:43.843765 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 20 18:52:43.843772 kernel: ima: Allocated hash algorithm: sha1 Jun 20 18:52:43.843778 kernel: ima: No architecture policies found Jun 20 18:52:43.843786 kernel: clk: Disabling unused clocks Jun 20 18:52:43.843793 kernel: Freeing unused kernel image (initmem) memory: 43488K Jun 20 18:52:43.843800 kernel: Write protecting the kernel read-only data: 38912k Jun 20 18:52:43.843806 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K Jun 20 18:52:43.843813 kernel: Run /init as init process Jun 20 18:52:43.843819 kernel: with arguments: Jun 20 18:52:43.843826 kernel: /init Jun 20 18:52:43.843833 kernel: with environment: Jun 20 18:52:43.843839 kernel: HOME=/ Jun 20 18:52:43.843847 kernel: TERM=linux Jun 20 18:52:43.843853 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 20 18:52:43.843861 systemd[1]: Successfully made /usr/ read-only. Jun 20 18:52:43.843871 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:52:43.843879 systemd[1]: Detected virtualization kvm. Jun 20 18:52:43.843885 systemd[1]: Detected architecture x86-64. Jun 20 18:52:43.843892 systemd[1]: Running in initrd. Jun 20 18:52:43.843899 systemd[1]: No hostname configured, using default hostname. Jun 20 18:52:43.843907 systemd[1]: Hostname set to <localhost>. Jun 20 18:52:43.843915 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:52:43.843922 systemd[1]: Queued start job for default target initrd.target. Jun 20 18:52:43.843929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:52:43.843936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:52:43.843943 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 20 18:52:43.843950 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:52:43.843958 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 18:52:43.843967 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 18:52:43.843975 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 18:52:43.843982 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 18:52:43.843989 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:52:43.843996 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:52:43.844002 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:52:43.844010 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:52:43.844017 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:52:43.844024 systemd[1]: Reached target timers.target - Timer Units.
Jun 20 18:52:43.844031 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:52:43.844038 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:52:43.844045 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 18:52:43.844052 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 18:52:43.844059 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:52:43.844066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:52:43.844074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:52:43.844081 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:52:43.844088 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 18:52:43.844095 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:52:43.844102 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 18:52:43.844109 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 18:52:43.844116 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:52:43.844122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:52:43.844145 systemd-journald[187]: Collecting audit messages is disabled. Jun 20 18:52:43.844165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:43.844172 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 18:52:43.844179 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:52:43.844188 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 18:52:43.844196 systemd-journald[187]: Journal started Jun 20 18:52:43.844212 systemd-journald[187]: Runtime Journal (/run/log/journal/5ee2dc81aa7b45dc97141d9bda2d875a) is 4.8M, max 38.3M, 33.5M free. Jun 20 18:52:43.845796 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 18:52:43.819923 systemd-modules-load[188]: Inserted module 'overlay' Jun 20 18:52:43.887745 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 18:52:43.887764 kernel: Bridge firewalling registered Jun 20 18:52:43.887773 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:52:43.854477 systemd-modules-load[188]: Inserted module 'br_netfilter' Jun 20 18:52:43.888365 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:52:43.889154 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:43.890158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 20 18:52:43.896563 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:52:43.898411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:52:43.906560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:52:43.909527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:52:43.910828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 20 18:52:43.911400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:43.913276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:52:43.915739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:52:43.920537 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 18:52:43.924546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:52:43.927237 dracut-cmdline[223]: dracut-dracut-053 Jun 20 18:52:43.931232 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c5ce7ee72c13e935b8a741ba19830125b417ea1672f46b6a215da9317cee8e17 Jun 20 18:52:43.957290 systemd-resolved[225]: Positive Trust Anchors: Jun 20 18:52:43.958158 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:52:43.958204 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:52:43.969527 systemd-resolved[225]: Defaulting to hostname 'linux'. Jun 20 18:52:43.970603 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:52:43.971540 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:52:43.995482 kernel: SCSI subsystem initialized Jun 20 18:52:44.003476 kernel: Loading iSCSI transport class v2.0-870. Jun 20 18:52:44.011458 kernel: iscsi: registered transport (tcp) Jun 20 18:52:44.027684 kernel: iscsi: registered transport (qla4xxx) Jun 20 18:52:44.027721 kernel: QLogic iSCSI HBA Driver Jun 20 18:52:44.051887 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 18:52:44.058605 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 18:52:44.077062 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 18:52:44.078703 kernel: device-mapper: uevent: version 1.0.3 Jun 20 18:52:44.078718 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 20 18:52:44.115478 kernel: raid6: avx2x4 gen() 31888 MB/s Jun 20 18:52:44.129467 kernel: raid6: avx2x2 gen() 30684 MB/s Jun 20 18:52:44.146556 kernel: raid6: avx2x1 gen() 21978 MB/s Jun 20 18:52:44.146581 kernel: raid6: using algorithm avx2x4 gen() 31888 MB/s Jun 20 18:52:44.164653 kernel: raid6: .... 
xor() 4416 MB/s, rmw enabled Jun 20 18:52:44.164686 kernel: raid6: using avx2x2 recovery algorithm Jun 20 18:52:44.181470 kernel: xor: automatically using best checksumming function avx Jun 20 18:52:44.295489 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 18:52:44.304987 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:52:44.314617 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:52:44.324616 systemd-udevd[408]: Using default interface naming scheme 'v255'. Jun 20 18:52:44.327848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:52:44.337599 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 18:52:44.348616 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jun 20 18:52:44.372052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:52:44.378612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:52:44.418265 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:52:44.425559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 20 18:52:44.436654 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 18:52:44.438528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:52:44.438986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:52:44.439418 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:52:44.445555 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 18:52:44.455609 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 18:52:44.490022 kernel: scsi host0: Virtio SCSI HBA Jun 20 18:52:44.490206 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jun 20 18:52:44.508462 kernel: ACPI: bus type USB registered Jun 20 18:52:44.508515 kernel: usbcore: registered new interface driver usbfs Jun 20 18:52:44.509502 kernel: usbcore: registered new interface driver hub Jun 20 18:52:44.510521 kernel: usbcore: registered new device driver usb Jun 20 18:52:44.522469 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 18:52:44.532712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:52:44.532813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:52:44.533372 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:52:44.533876 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:52:44.533962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:44.536702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:44.563054 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:44.581385 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 18:52:44.582301 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jun 20 18:52:44.582408 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jun 20 18:52:44.584945 kernel: libata version 3.00 loaded. 
Jun 20 18:52:44.588472 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jun 20 18:52:44.588656 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jun 20 18:52:44.588749 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jun 20 18:52:44.588835 kernel: hub 1-0:1.0: USB hub found Jun 20 18:52:44.590467 kernel: hub 1-0:1.0: 4 ports detected Jun 20 18:52:44.590575 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jun 20 18:52:44.590790 kernel: hub 2-0:1.0: USB hub found Jun 20 18:52:44.590883 kernel: hub 2-0:1.0: 4 ports detected Jun 20 18:52:44.597926 kernel: sd 0:0:0:0: Power-on or device reset occurred Jun 20 18:52:44.599067 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jun 20 18:52:44.599223 kernel: sd 0:0:0:0: [sda] Write Protect is off Jun 20 18:52:44.599749 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jun 20 18:52:44.599842 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jun 20 18:52:44.601963 kernel: AVX2 version of gcm_enc/dec engaged. Jun 20 18:52:44.601980 kernel: AES CTR mode by8 optimization enabled Jun 20 18:52:44.601994 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 18:52:44.602007 kernel: GPT:17805311 != 80003071 Jun 20 18:52:44.602015 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 18:52:44.602025 kernel: GPT:17805311 != 80003071 Jun 20 18:52:44.602037 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 18:52:44.602049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:52:44.602636 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jun 20 18:52:44.609467 kernel: ahci 0000:00:1f.2: version 3.0 Jun 20 18:52:44.609612 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 20 18:52:44.611458 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jun 20 18:52:44.611576 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 20 18:52:44.615460 kernel: scsi host1: ahci Jun 20 18:52:44.615601 kernel: scsi host2: ahci Jun 20 18:52:44.615695 kernel: scsi host3: ahci Jun 20 18:52:44.616714 kernel: scsi host4: ahci Jun 20 18:52:44.618559 kernel: scsi host5: ahci Jun 20 18:52:44.618696 kernel: scsi host6: ahci Jun 20 18:52:44.618807 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51 Jun 20 18:52:44.618818 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51 Jun 20 18:52:44.618826 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51 Jun 20 18:52:44.618834 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51 Jun 20 18:52:44.618845 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51 Jun 20 18:52:44.618853 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51 Jun 20 18:52:44.648481 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (457) Jun 20 18:52:44.655463 kernel: BTRFS: device fsid 5ff786f3-14e2-4689-ad32-ff903cf13f91 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (456) Jun 20 18:52:44.663964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jun 20 18:52:44.706528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:44.719107 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
Jun 20 18:52:44.726494 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 18:52:44.732562 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jun 20 18:52:44.733076 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jun 20 18:52:44.740616 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 18:52:44.743005 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 18:52:44.746129 disk-uuid[558]: Primary Header is updated. Jun 20 18:52:44.746129 disk-uuid[558]: Secondary Entries is updated. Jun 20 18:52:44.746129 disk-uuid[558]: Secondary Header is updated. Jun 20 18:52:44.755737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:52:44.765458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:52:44.777477 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:52:44.836472 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jun 20 18:52:44.926465 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 20 18:52:44.926542 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jun 20 18:52:44.929086 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 20 18:52:44.929452 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 20 18:52:44.931930 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 20 18:52:44.932452 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 20 18:52:44.933466 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 20 18:52:44.934767 kernel: ata1.00: applying bridge limits Jun 20 18:52:44.936639 kernel: ata1.00: configured for UDMA/100 Jun 20 18:52:44.937481 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 20 18:52:44.977474 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 20 18:52:44.983908 kernel: usbcore: registered new interface driver usbhid Jun 20 18:52:44.983948 kernel: usbhid: USB HID core driver Jun 20 18:52:44.983972 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 20 18:52:44.984295 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 18:52:44.984312 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jun 20 18:52:44.988818 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jun 20 18:52:44.991471 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jun 20 18:52:45.775536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jun 20 18:52:45.776348 disk-uuid[561]: The operation has completed successfully. Jun 20 18:52:45.844679 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 18:52:45.844760 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 18:52:45.879547 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 18:52:45.882047 sh[598]: Success Jun 20 18:52:45.893508 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jun 20 18:52:45.941744 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 18:52:45.951577 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 18:52:45.952162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 18:52:45.973078 kernel: BTRFS info (device dm-0): first mount of filesystem 5ff786f3-14e2-4689-ad32-ff903cf13f91 Jun 20 18:52:45.973119 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:52:45.973142 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 20 18:52:45.976611 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 20 18:52:45.978788 kernel: BTRFS info (device dm-0): using free space tree Jun 20 18:52:45.987471 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 20 18:52:45.989482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 18:52:45.990975 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 18:52:46.002694 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 18:52:46.005500 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 18:52:46.030012 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:46.030050 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:52:46.030062 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:52:46.037811 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 18:52:46.037843 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:52:46.043463 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:46.044910 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 18:52:46.052568 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 20 18:52:46.071857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:52:46.079690 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:52:46.100497 systemd-networkd[776]: lo: Link UP Jun 20 18:52:46.100503 systemd-networkd[776]: lo: Gained carrier Jun 20 18:52:46.103020 systemd-networkd[776]: Enumeration completed Jun 20 18:52:46.103156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:52:46.103678 systemd[1]: Reached target network.target - Network. Jun 20 18:52:46.104708 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:46.104711 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:52:46.106268 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:46.106271 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:52:46.107534 systemd-networkd[776]: eth0: Link UP Jun 20 18:52:46.107537 systemd-networkd[776]: eth0: Gained carrier Jun 20 18:52:46.107542 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
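Both NICs above are matched by /usr/lib/systemd/network/zz-default.network, and systemd-networkd notes that the match is based on a potentially unpredictable, kernel-assigned interface name. As a hedged illustration of the usual way to avoid that class of warning, a higher-priority unit in /etc/systemd/network/ can pin the match to a stable attribute such as the MAC address. The unit text and the MAC value below are placeholders for illustration, not the Flatcar-shipped default.

    from pathlib import Path

    # Illustration only: match on MACAddress instead of the kernel-assigned
    # name; a file in /etc/systemd/network/ that sorts before
    # zz-default.network takes precedence for the interface it matches.
    unit = (
        "[Match]\n"
        "MACAddress=00:11:22:33:44:55\n"   # placeholder address
        "\n"
        "[Network]\n"
        "DHCP=yes\n"
    )
    Path("/etc/systemd/network/10-uplink.network").write_text(unit)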
Jun 20 18:52:46.112632 systemd-networkd[776]: eth1: Link UP Jun 20 18:52:46.112635 systemd-networkd[776]: eth1: Gained carrier Jun 20 18:52:46.112641 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:46.129155 ignition[723]: Ignition 2.20.0 Jun 20 18:52:46.129167 ignition[723]: Stage: fetch-offline Jun 20 18:52:46.129191 ignition[723]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:46.130850 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:52:46.129197 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:46.129255 ignition[723]: parsed url from cmdline: "" Jun 20 18:52:46.129257 ignition[723]: no config URL provided Jun 20 18:52:46.129261 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:52:46.129266 ignition[723]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:52:46.129270 ignition[723]: failed to fetch config: resource requires networking Jun 20 18:52:46.129421 ignition[723]: Ignition finished successfully Jun 20 18:52:46.137852 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 20 18:52:46.145136 ignition[785]: Ignition 2.20.0 Jun 20 18:52:46.145144 ignition[785]: Stage: fetch Jun 20 18:52:46.145267 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:46.145275 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:46.145336 ignition[785]: parsed url from cmdline: "" Jun 20 18:52:46.145339 ignition[785]: no config URL provided Jun 20 18:52:46.145342 ignition[785]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 18:52:46.145348 ignition[785]: no config at "/usr/lib/ignition/user.ign" Jun 20 18:52:46.145376 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jun 20 18:52:46.145492 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jun 20 18:52:46.161485 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 18:52:46.179494 systemd-networkd[776]: eth0: DHCPv4 address 46.62.134.149/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 18:52:46.346257 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jun 20 18:52:46.355327 ignition[785]: GET result: OK Jun 20 18:52:46.355473 ignition[785]: parsing config with SHA512: f70ed09b2dd8d23d3a8aefd2488596bd4003c43843df91a3375eec2bb18cb6dbb0902bdd28d3bbbb97b24657c858231c2ba769cad365ecbbe50d664bfc34421b Jun 20 18:52:46.364153 unknown[785]: fetched base config from "system" Jun 20 18:52:46.364168 unknown[785]: fetched base config from "system" Jun 20 18:52:46.364826 ignition[785]: fetch: fetch complete Jun 20 18:52:46.364176 unknown[785]: fetched user config from "hetzner" Jun 20 18:52:46.364834 ignition[785]: fetch: fetch passed Jun 20 18:52:46.367107 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 20 18:52:46.364898 ignition[785]: Ignition finished successfully Jun 20 18:52:46.375704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
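The fetch stage above fails while only the link-local stack is up ("network is unreachable"), succeeds on attempt #2 once DHCP has configured the interfaces, and then logs the SHA512 of the config it parsed. Below is a rough, illustrative re-creation of that fetch-with-retry behaviour in Python; only the URL and the SHA512 logging mirror the journal, while the attempt count and delay are arbitrary values chosen for the sketch.

    import hashlib, time, urllib.error, urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint seen in the log

    def fetch_userdata(attempts=5, delay=2.0):
        """Retry the metadata request until networking is up, roughly like the fetch stage."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                    body = resp.read()
                # Ignition logs the SHA512 of the config it is about to parse
                print("attempt", attempt, "OK, sha512:", hashlib.sha512(body).hexdigest())
                return body
            except (urllib.error.URLError, OSError) as err:
                print("attempt", attempt, "failed:", err)
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")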
Jun 20 18:52:46.395287 ignition[793]: Ignition 2.20.0 Jun 20 18:52:46.395306 ignition[793]: Stage: kargs Jun 20 18:52:46.395593 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:46.395609 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:46.398638 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 18:52:46.397208 ignition[793]: kargs: kargs passed Jun 20 18:52:46.397265 ignition[793]: Ignition finished successfully Jun 20 18:52:46.407642 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jun 20 18:52:46.422591 ignition[799]: Ignition 2.20.0 Jun 20 18:52:46.422610 ignition[799]: Stage: disks Jun 20 18:52:46.422914 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:46.425758 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 18:52:46.422926 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:46.433171 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 18:52:46.424069 ignition[799]: disks: disks passed Jun 20 18:52:46.434553 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 18:52:46.424114 ignition[799]: Ignition finished successfully Jun 20 18:52:46.436191 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:52:46.437961 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:52:46.439701 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:52:46.447633 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 18:52:46.461085 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jun 20 18:52:46.462673 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 18:52:46.468540 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 18:52:46.538462 kernel: EXT4-fs (sda9): mounted filesystem 943f8432-3dc9-4e22-b9bd-c29bf6a1f5e1 r/w with ordered data mode. Quota mode: none. Jun 20 18:52:46.539724 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 18:52:46.540924 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 18:52:46.547548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:52:46.549516 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 18:52:46.552610 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jun 20 18:52:46.553333 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 18:52:46.553373 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:52:46.556371 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 18:52:46.559499 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jun 20 18:52:46.568469 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (816) Jun 20 18:52:46.572241 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:46.572272 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:52:46.575385 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:52:46.582464 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 18:52:46.582489 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:52:46.585894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 18:52:46.603825 coreos-metadata[818]: Jun 20 18:52:46.603 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jun 20 18:52:46.605399 coreos-metadata[818]: Jun 20 18:52:46.605 INFO Fetch successful Jun 20 18:52:46.607451 coreos-metadata[818]: Jun 20 18:52:46.605 INFO wrote hostname ci-4230-2-0-5-00d7cf22d6 to /sysroot/etc/hostname Jun 20 18:52:46.607365 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:52:46.610181 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 18:52:46.612063 initrd-setup-root[852]: cut: /sysroot/etc/group: No such file or directory Jun 20 18:52:46.614951 initrd-setup-root[859]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 18:52:46.618876 initrd-setup-root[866]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 18:52:46.676477 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 18:52:46.681507 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 18:52:46.684338 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 18:52:46.689474 kernel: BTRFS info (device sda6): last unmount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:46.704038 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 20 18:52:46.705188 ignition[934]: INFO : Ignition 2.20.0 Jun 20 18:52:46.705188 ignition[934]: INFO : Stage: mount Jun 20 18:52:46.706205 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:46.706205 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:46.706205 ignition[934]: INFO : mount: mount passed Jun 20 18:52:46.708244 ignition[934]: INFO : Ignition finished successfully Jun 20 18:52:46.707283 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 18:52:46.713503 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 18:52:46.970246 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 18:52:46.977694 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 18:52:46.993517 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945) Jun 20 18:52:46.999310 kernel: BTRFS info (device sda6): first mount of filesystem 0d4ae0d2-6537-4cbd-8c37-7b929dcf3a9f Jun 20 18:52:46.999393 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 18:52:47.003937 kernel: BTRFS info (device sda6): using free space tree Jun 20 18:52:47.012026 kernel: BTRFS info (device sda6): enabling ssd optimizations Jun 20 18:52:47.012077 kernel: BTRFS info (device sda6): auto enabling async discard Jun 20 18:52:47.017804 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
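coreos-metadata above resolves the machine hostname from the same link-local metadata service and writes it into the not-yet-pivoted root. A bare-bones equivalent, assuming (as the journal output suggests) that the endpoint returns the hostname as plain text:

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # endpoint from the log

    # Assumes a plain-text response, as the journal suggests
    # ("wrote hostname ci-4230-2-0-5-00d7cf22d6 to /sysroot/etc/hostname").
    hostname = urllib.request.urlopen(HOSTNAME_URL, timeout=10).read().decode().strip()
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")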
Jun 20 18:52:47.045129 ignition[962]: INFO : Ignition 2.20.0 Jun 20 18:52:47.045129 ignition[962]: INFO : Stage: files Jun 20 18:52:47.046328 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:47.046328 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:47.046328 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jun 20 18:52:47.049186 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 18:52:47.049186 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 18:52:47.051043 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 18:52:47.051043 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 18:52:47.052826 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 18:52:47.052826 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 18:52:47.052826 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jun 20 18:52:47.051224 unknown[962]: wrote ssh authorized keys file for user: core Jun 20 18:52:47.329256 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 18:52:47.947783 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jun 20 18:52:47.947783 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:52:47.949861 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 18:52:48.054724 systemd-networkd[776]: eth0: Gained IPv6LL Jun 20 18:52:48.118666 systemd-networkd[776]: eth1: Gained IPv6LL Jun 20 18:52:48.712702 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 18:52:48.959317 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:48.961423 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:48.991235 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jun 20 18:52:49.686863 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 18:52:49.957064 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jun 20 18:52:49.957064 ignition[962]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 18:52:49.959393 ignition[962]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(e): op(f): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(e): op(f): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 18:52:49.961414 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:52:49.961414 ignition[962]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 18:52:49.961414 ignition[962]: INFO : files: files passed Jun 20 18:52:49.961414 ignition[962]: INFO : Ignition finished successfully Jun 20 18:52:49.961214 systemd[1]: Finished ignition-files.service - Ignition (files). 
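The files stage above is driven entirely by the user-provided Ignition config fetched earlier: files with remote sources, a symlink for the kubernetes sysext, systemd units with drop-ins, and presets. As a rough illustration of the config shape that produces such operations, here is a skeletal Ignition spec-v3 style document assembled in Python; it is not the actual config from this machine, and the unit contents are stubbed out.

    import json

    # Skeletal Ignition (spec v3 style) config illustrating the kinds of
    # entries behind the files-stage operations logged above. Not the real
    # config for this host; unit contents are stubbed.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                },
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."},
                {
                    "name": "coreos-metadata.service",
                    "dropins": [{"name": "00-custom-metadata.conf", "contents": "[Service]\n..."}],
                },
            ],
        },
    }

    print(json.dumps(config, indent=2))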
Jun 20 18:52:49.973541 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 18:52:49.975526 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 18:52:49.976908 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 18:52:49.976965 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 20 18:52:49.985163 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:49.985163 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:49.987673 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 18:52:49.988110 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:52:49.989274 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 18:52:49.996554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 18:52:50.016589 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 18:52:50.016678 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 18:52:50.017859 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 18:52:50.018694 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 18:52:50.019746 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 18:52:50.020867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 18:52:50.030519 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:52:50.035568 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 18:52:50.042057 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:52:50.043199 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:52:50.043800 systemd[1]: Stopped target timers.target - Timer Units. Jun 20 18:52:50.044759 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 18:52:50.044837 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 18:52:50.045926 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 18:52:50.046692 systemd[1]: Stopped target basic.target - Basic System. Jun 20 18:52:50.047827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 18:52:50.048813 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 18:52:50.049730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 18:52:50.050764 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 18:52:50.051818 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 18:52:50.052864 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 18:52:50.053810 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 18:52:50.054829 systemd[1]: Stopped target swap.target - Swaps. Jun 20 18:52:50.055752 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 18:52:50.055831 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jun 20 18:52:50.056893 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:52:50.057524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:52:50.058371 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 18:52:50.060490 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:52:50.061013 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 18:52:50.061089 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 20 18:52:50.062378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 18:52:50.062481 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 18:52:50.063064 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 18:52:50.063172 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 18:52:50.064033 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jun 20 18:52:50.064139 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jun 20 18:52:50.076591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 18:52:50.077016 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 18:52:50.077137 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:52:50.080455 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 18:52:50.080873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 18:52:50.080993 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:52:50.081591 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 18:52:50.081673 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 18:52:50.091168 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 18:52:50.091238 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 18:52:50.098261 ignition[1015]: INFO : Ignition 2.20.0 Jun 20 18:52:50.098261 ignition[1015]: INFO : Stage: umount Jun 20 18:52:50.098261 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 18:52:50.098261 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jun 20 18:52:50.098261 ignition[1015]: INFO : umount: umount passed Jun 20 18:52:50.098261 ignition[1015]: INFO : Ignition finished successfully Jun 20 18:52:50.098786 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 18:52:50.099206 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 18:52:50.099269 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 18:52:50.101160 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 18:52:50.101213 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 18:52:50.102196 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 18:52:50.102230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 18:52:50.103023 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 20 18:52:50.103054 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 20 18:52:50.103856 systemd[1]: Stopped target network.target - Network. Jun 20 18:52:50.104670 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jun 20 18:52:50.104704 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 18:52:50.105584 systemd[1]: Stopped target paths.target - Path Units. Jun 20 18:52:50.106353 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 18:52:50.110603 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:52:50.111092 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 18:52:50.111907 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 18:52:50.112937 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 18:52:50.112964 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 18:52:50.114113 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 18:52:50.114141 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 18:52:50.115071 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 18:52:50.115107 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 18:52:50.115986 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 18:52:50.116017 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 18:52:50.117097 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 18:52:50.118040 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 20 18:52:50.120014 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 18:52:50.120095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 18:52:50.121135 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 18:52:50.121186 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 18:52:50.125054 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 18:52:50.125125 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 18:52:50.127460 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 18:52:50.127687 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 18:52:50.127719 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:52:50.129182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:52:50.129335 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 18:52:50.129464 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 18:52:50.131232 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 18:52:50.131552 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 18:52:50.131591 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:52:50.141521 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 18:52:50.142767 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 18:52:50.142817 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 18:52:50.143429 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 18:52:50.143490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:50.144910 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 18:52:50.144954 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jun 20 18:52:50.145712 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:52:50.148309 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 18:52:50.155002 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 18:52:50.155109 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 18:52:50.161064 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 18:52:50.161198 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:52:50.162349 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 18:52:50.162394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 18:52:50.163226 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 18:52:50.163248 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:52:50.164219 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 18:52:50.164252 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 18:52:50.165618 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 18:52:50.165654 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 18:52:50.166624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 18:52:50.166666 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 18:52:50.180581 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 18:52:50.181750 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 20 18:52:50.181803 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:52:50.183608 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:52:50.183645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:50.184388 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 18:52:50.184462 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 18:52:50.185773 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 18:52:50.188485 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 18:52:50.195228 systemd[1]: Switching root. Jun 20 18:52:50.217533 systemd-journald[187]: Journal stopped Jun 20 18:52:51.013376 systemd-journald[187]: Received SIGTERM from PID 1 (systemd). Jun 20 18:52:51.013424 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 18:52:51.013435 kernel: SELinux: policy capability open_perms=1 Jun 20 18:52:51.013458 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 18:52:51.013466 kernel: SELinux: policy capability always_check_network=0 Jun 20 18:52:51.013477 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 18:52:51.013487 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 18:52:51.013495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 18:52:51.013505 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 18:52:51.013513 kernel: audit: type=1403 audit(1750445570.360:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 18:52:51.013524 systemd[1]: Successfully loaded SELinux policy in 36.053ms. Jun 20 18:52:51.013541 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.344ms. 
Jun 20 18:52:51.013550 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 18:52:51.013559 systemd[1]: Detected virtualization kvm. Jun 20 18:52:51.013569 systemd[1]: Detected architecture x86-64. Jun 20 18:52:51.013577 systemd[1]: Detected first boot. Jun 20 18:52:51.013585 systemd[1]: Hostname set to . Jun 20 18:52:51.013594 systemd[1]: Initializing machine ID from VM UUID. Jun 20 18:52:51.013604 zram_generator::config[1060]: No configuration found. Jun 20 18:52:51.013616 kernel: Guest personality initialized and is inactive Jun 20 18:52:51.013624 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 18:52:51.013631 kernel: Initialized host personality Jun 20 18:52:51.013640 kernel: NET: Registered PF_VSOCK protocol family Jun 20 18:52:51.013648 systemd[1]: Populated /etc with preset unit settings. Jun 20 18:52:51.013657 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 18:52:51.013666 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 18:52:51.013674 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 18:52:51.013682 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 18:52:51.013690 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 18:52:51.013699 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 18:52:51.013707 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 18:52:51.013717 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 18:52:51.013726 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 18:52:51.013735 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 18:52:51.013743 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 18:52:51.013751 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 18:52:51.013760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 18:52:51.013768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 18:52:51.013776 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 20 18:52:51.013785 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 18:52:51.013794 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 18:52:51.013803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 18:52:51.013812 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 18:52:51.013820 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 18:52:51.013828 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 18:52:51.013837 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jun 20 18:52:51.013846 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 18:52:51.013855 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 18:52:51.013863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 18:52:51.013872 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 18:52:51.013880 systemd[1]: Reached target slices.target - Slice Units. Jun 20 18:52:51.013889 systemd[1]: Reached target swap.target - Swaps. Jun 20 18:52:51.013897 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 18:52:51.013905 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 18:52:51.013913 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 18:52:51.013925 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 18:52:51.013935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 18:52:51.013943 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 18:52:51.013951 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 18:52:51.013959 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 18:52:51.013968 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 18:52:51.013977 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 18:52:51.013985 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:51.013994 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 18:52:51.014002 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 18:52:51.014010 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 18:52:51.014019 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 18:52:51.014028 systemd[1]: Reached target machines.target - Containers. Jun 20 18:52:51.014036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 18:52:51.014046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:52:51.014054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 18:52:51.014062 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 18:52:51.014070 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:52:51.014079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:52:51.014088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:52:51.014097 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 20 18:52:51.014105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:52:51.014114 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 18:52:51.014124 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jun 20 18:52:51.014132 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 18:52:51.014140 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 18:52:51.014149 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 18:52:51.014157 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:52:51.014165 kernel: loop: module loaded Jun 20 18:52:51.014174 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 18:52:51.014182 kernel: fuse: init (API version 7.39) Jun 20 18:52:51.014192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 18:52:51.014200 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 18:52:51.014209 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 18:52:51.014217 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 18:52:51.014226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 18:52:51.014234 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 18:52:51.014243 systemd[1]: Stopped verity-setup.service. Jun 20 18:52:51.014252 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:51.014274 systemd-journald[1151]: Collecting audit messages is disabled. Jun 20 18:52:51.014295 systemd-journald[1151]: Journal started Jun 20 18:52:51.014314 systemd-journald[1151]: Runtime Journal (/run/log/journal/5ee2dc81aa7b45dc97141d9bda2d875a) is 4.8M, max 38.3M, 33.5M free. Jun 20 18:52:50.781998 systemd[1]: Queued start job for default target multi-user.target. Jun 20 18:52:50.791397 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jun 20 18:52:50.791738 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 18:52:51.018990 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 18:52:51.019023 kernel: ACPI: bus type drm_connector registered Jun 20 18:52:51.025315 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 18:52:51.025831 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 18:52:51.026326 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 18:52:51.027554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 18:52:51.028798 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 18:52:51.029393 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 18:52:51.030049 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 18:52:51.030794 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 18:52:51.031528 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 18:52:51.031706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 18:52:51.032586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:52:51.032746 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:52:51.033427 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 20 18:52:51.033742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:52:51.034383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:52:51.034631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:52:51.035293 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 18:52:51.035500 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 18:52:51.036178 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:52:51.036409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:52:51.037179 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 18:52:51.037885 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 18:52:51.038700 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 18:52:51.044012 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 18:52:51.046927 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 18:52:51.051728 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 18:52:51.055045 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 18:52:51.056240 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 18:52:51.056317 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 18:52:51.057698 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 18:52:51.060986 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 18:52:51.067816 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 20 18:52:51.070719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:52:51.075299 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 18:52:51.076886 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 18:52:51.077617 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:52:51.080520 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 18:52:51.080993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:52:51.082280 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:52:51.085799 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 18:52:51.091356 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 18:52:51.095867 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 18:52:51.098599 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 18:52:51.100063 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 18:52:51.101721 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jun 20 18:52:51.110141 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 18:52:51.114169 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 20 18:52:51.118468 kernel: loop0: detected capacity change from 0 to 8 Jun 20 18:52:51.120058 systemd-journald[1151]: Time spent on flushing to /var/log/journal/5ee2dc81aa7b45dc97141d9bda2d875a is 19.187ms for 1149 entries. Jun 20 18:52:51.120058 systemd-journald[1151]: System Journal (/var/log/journal/5ee2dc81aa7b45dc97141d9bda2d875a) is 8M, max 584.8M, 576.8M free. Jun 20 18:52:51.144591 systemd-journald[1151]: Received client request to flush runtime journal. Jun 20 18:52:51.144626 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 18:52:51.120210 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 18:52:51.135662 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 20 18:52:51.145604 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 18:52:51.163065 udevadm[1198]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 20 18:52:51.170298 kernel: loop1: detected capacity change from 0 to 147912 Jun 20 18:52:51.167254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:52:51.177411 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 18:52:51.178895 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 18:52:51.184605 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 18:52:51.213461 kernel: loop2: detected capacity change from 0 to 138176 Jun 20 18:52:51.212756 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jun 20 18:52:51.212771 systemd-tmpfiles[1206]: ACLs are not supported, ignoring. Jun 20 18:52:51.218222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 18:52:51.264470 kernel: loop3: detected capacity change from 0 to 229808 Jun 20 18:52:51.307552 kernel: loop4: detected capacity change from 0 to 8 Jun 20 18:52:51.310522 kernel: loop5: detected capacity change from 0 to 147912 Jun 20 18:52:51.329471 kernel: loop6: detected capacity change from 0 to 138176 Jun 20 18:52:51.349483 kernel: loop7: detected capacity change from 0 to 229808 Jun 20 18:52:51.373687 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jun 20 18:52:51.374151 (sd-merge)[1213]: Merged extensions into '/usr'. Jun 20 18:52:51.378038 systemd[1]: Reload requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 18:52:51.378054 systemd[1]: Reloading... Jun 20 18:52:51.449545 zram_generator::config[1241]: No configuration found. Jun 20 18:52:51.526837 ldconfig[1181]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 18:52:51.537480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:52:51.592662 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 18:52:51.593077 systemd[1]: Reloading finished in 214 ms. 
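The (sd-merge) worker above overlays the containerd-flatcar, docker-flatcar, kubernetes and oem-hetzner system extensions onto /usr. systemd-sysext only merges an image whose extension-release file is compatible with the running OS; the sketch below shows that compatibility check, using the standard admin-facing extension location /var/lib/extensions and the basic match rule from the systemd-sysext documentation (ID must equal the host's ID, or be _any). Flatcar's own images live elsewhere and are resolved by the worker itself, so this is illustrative only.

    from pathlib import Path

    def release_fields(path):
        """Parse an os-release style KEY=VALUE file into a dict."""
        fields = {}
        for line in Path(path).read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, value = line.split("=", 1)
                fields[key] = value.strip('"')
        return fields

    def extension_is_compatible(name, root="/var/lib/extensions"):
        # systemd-sysext requires usr/lib/extension-release.d/extension-release.<NAME>
        # inside the image, with an ID matching the host (or ID=_any).
        host = release_fields("/etc/os-release")
        rel = release_fields(
            f"{root}/{name}/usr/lib/extension-release.d/extension-release.{name}"
        )
        return rel.get("ID") in ("_any", host.get("ID"))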
Jun 20 18:52:51.612166 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 18:52:51.613042 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 18:52:51.623589 systemd[1]: Starting ensure-sysext.service... Jun 20 18:52:51.627726 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 18:52:51.643815 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 18:52:51.647584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 18:52:51.648341 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)... Jun 20 18:52:51.648349 systemd[1]: Reloading... Jun 20 18:52:51.652544 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 18:52:51.652917 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 18:52:51.653578 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 18:52:51.653844 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jun 20 18:52:51.653943 systemd-tmpfiles[1285]: ACLs are not supported, ignoring. Jun 20 18:52:51.656550 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:52:51.656618 systemd-tmpfiles[1285]: Skipping /boot Jun 20 18:52:51.663162 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 18:52:51.663221 systemd-tmpfiles[1285]: Skipping /boot Jun 20 18:52:51.690304 systemd-udevd[1287]: Using default interface naming scheme 'v255'. Jun 20 18:52:51.702463 zram_generator::config[1312]: No configuration found. Jun 20 18:52:51.817223 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:52:51.841468 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 18:52:51.856463 kernel: ACPI: button: Power Button [PWRF] Jun 20 18:52:51.876456 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 18:52:51.878727 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 18:52:51.878969 systemd[1]: Reloading finished in 230 ms. Jun 20 18:52:51.884540 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1334) Jun 20 18:52:51.889320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 18:52:51.899040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 18:52:51.917959 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. 
Jun 20 18:52:51.935730 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 20 18:52:51.935948 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jun 20 18:52:51.936065 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 20 18:52:51.940799 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jun 20 18:52:51.940832 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jun 20 18:52:51.948466 kernel: Console: switching to colour dummy device 80x25 Jun 20 18:52:51.950799 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jun 20 18:52:51.950828 kernel: [drm] features: -context_init Jun 20 18:52:51.953207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:51.955591 kernel: [drm] number of scanouts: 1 Jun 20 18:52:51.955622 kernel: [drm] number of cap sets: 0 Jun 20 18:52:51.956728 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jun 20 18:52:51.959477 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jun 20 18:52:51.964077 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jun 20 18:52:51.964109 kernel: Console: switching to colour frame buffer device 160x50 Jun 20 18:52:51.965384 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:52:51.971939 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jun 20 18:52:51.981399 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 18:52:51.983148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:52:51.986507 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:52:51.995475 kernel: EDAC MC: Ver: 3.0.0 Jun 20 18:52:51.993611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:52:51.995853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 18:52:51.998659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:52:51.998755 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:52:52.000643 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 18:52:52.003906 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 18:52:52.006960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 18:52:52.009772 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 18:52:52.009838 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:52.011070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:52:52.011185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:52:52.012247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:52:52.012612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jun 20 18:52:52.035092 systemd[1]: Finished ensure-sysext.service. Jun 20 18:52:52.046346 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 18:52:52.046523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 18:52:52.049686 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 18:52:52.053131 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jun 20 18:52:52.063013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:52.063689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 18:52:52.070619 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 18:52:52.073482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 18:52:52.075294 augenrules[1429]: No rules Jun 20 18:52:52.075704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 18:52:52.076135 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 18:52:52.079008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 20 18:52:52.079107 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 18:52:52.081561 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 18:52:52.085867 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 18:52:52.087803 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:52.088431 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 18:52:52.090037 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:52:52.091488 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:52:52.092050 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 18:52:52.094271 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 18:52:52.094815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 18:52:52.098918 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 18:52:52.099052 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 18:52:52.099785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 18:52:52.101390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 18:52:52.103069 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 18:52:52.117610 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 18:52:52.117807 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 18:52:52.123609 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jun 20 18:52:52.127111 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 18:52:52.127267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:52.130949 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 18:52:52.131115 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 18:52:52.133350 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 18:52:52.144601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 18:52:52.151908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 18:52:52.157564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 18:52:52.174777 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 20 18:52:52.182586 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 20 18:52:52.196524 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:52:52.219083 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 20 18:52:52.221979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 18:52:52.225064 systemd-networkd[1403]: lo: Link UP Jun 20 18:52:52.225076 systemd-networkd[1403]: lo: Gained carrier Jun 20 18:52:52.225667 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 20 18:52:52.228986 systemd-networkd[1403]: Enumeration completed Jun 20 18:52:52.229067 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 18:52:52.230677 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.230681 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:52:52.233003 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.233014 systemd-networkd[1403]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 18:52:52.233504 systemd-networkd[1403]: eth0: Link UP Jun 20 18:52:52.233513 systemd-networkd[1403]: eth0: Gained carrier Jun 20 18:52:52.233523 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.236535 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 18:52:52.238790 systemd-resolved[1405]: Positive Trust Anchors: Jun 20 18:52:52.238797 systemd-resolved[1405]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 18:52:52.238822 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 18:52:52.240036 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 18:52:52.240661 systemd-networkd[1403]: eth1: Link UP Jun 20 18:52:52.240664 systemd-networkd[1403]: eth1: Gained carrier Jun 20 18:52:52.240677 systemd-networkd[1403]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 18:52:52.247516 systemd-resolved[1405]: Using system hostname 'ci-4230-2-0-5-00d7cf22d6'. Jun 20 18:52:52.249109 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 20 18:52:52.250549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 18:52:52.250943 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 18:52:52.251267 systemd[1]: Reached target network.target - Network. Jun 20 18:52:52.251610 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 18:52:52.251906 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 18:52:52.262783 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 18:52:52.269098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 18:52:52.270907 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 18:52:52.271358 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 18:52:52.271779 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 18:52:52.272284 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 18:52:52.272768 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 18:52:52.273164 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 18:52:52.273569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 18:52:52.273642 systemd[1]: Reached target paths.target - Path Units. Jun 20 18:52:52.274078 systemd[1]: Reached target timers.target - Timer Units. Jun 20 18:52:52.277291 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 20 18:52:52.279202 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 18:52:52.283044 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 18:52:52.283606 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 18:52:52.284395 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Jun 20 18:52:52.285878 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 18:52:52.286996 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 18:52:52.287885 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 20 18:52:52.288314 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 18:52:52.289156 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 18:52:52.289529 systemd[1]: Reached target basic.target - Basic System. Jun 20 18:52:52.289909 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:52:52.289939 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 18:52:52.290487 systemd-networkd[1403]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 18:52:52.291284 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Jun 20 18:52:52.291552 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 18:52:52.294566 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 20 18:52:52.303435 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 20 18:52:52.305319 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 18:52:52.307573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 18:52:52.307898 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 18:52:52.309818 systemd-networkd[1403]: eth0: DHCPv4 address 46.62.134.149/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jun 20 18:52:52.309884 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 18:52:52.312208 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Jun 20 18:52:52.312700 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 18:52:52.316405 jq[1481]: false Jun 20 18:52:52.319877 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jun 20 18:52:52.328587 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 18:52:52.335554 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
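Both NICs come up with /32 leases and gateways outside any local prefix (for example 46.62.134.149/32 via 172.31.1.1 on eth0), so the gateway is reachable only through an explicit on-link host route that systemd-networkd derives from the DHCP reply. A rough, hedged reconstruction of the resulting state with iproute2, driven from Python (addresses copied from the log; not something to run on a host networkd already manages):

import subprocess

# Illustrative only: systemd-networkd installs the equivalent routes itself.
cmds = [
    ["ip", "addr", "add", "46.62.134.149/32", "dev", "eth0"],
    ["ip", "route", "add", "172.31.1.1", "dev", "eth0", "scope", "link"],
    ["ip", "route", "add", "default", "via", "172.31.1.1", "dev", "eth0", "onlink"],
]
for cmd in cmds:
    subprocess.run(cmd, check=True)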
Jun 20 18:52:52.335804 coreos-metadata[1479]: Jun 20 18:52:52.335 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jun 20 18:52:52.338284 coreos-metadata[1479]: Jun 20 18:52:52.338 INFO Fetch successful Jun 20 18:52:52.338284 coreos-metadata[1479]: Jun 20 18:52:52.338 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jun 20 18:52:52.339134 coreos-metadata[1479]: Jun 20 18:52:52.338 INFO Fetch successful Jun 20 18:52:52.339958 extend-filesystems[1484]: Found loop4 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found loop5 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found loop6 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found loop7 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda1 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda2 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda3 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found usr Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda4 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda6 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda7 Jun 20 18:52:52.347343 extend-filesystems[1484]: Found sda9 Jun 20 18:52:52.347343 extend-filesystems[1484]: Checking size of /dev/sda9 Jun 20 18:52:52.350701 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 18:52:52.358221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 18:52:52.359561 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 20 18:52:52.361034 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 18:52:52.374592 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 18:52:52.377987 extend-filesystems[1484]: Resized partition /dev/sda9 Jun 20 18:52:52.381597 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 18:52:52.381757 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 18:52:52.385315 jq[1496]: true Jun 20 18:52:52.390911 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 18:52:52.391134 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 18:52:52.406754 extend-filesystems[1501]: resize2fs 1.47.1 (20-May-2024) Jun 20 18:52:52.415975 (ntainerd)[1508]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 18:52:52.423032 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Jun 20 18:52:52.429146 update_engine[1494]: I20250620 18:52:52.427248 1494 main.cc:92] Flatcar Update Engine starting Jun 20 18:52:52.431673 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 18:52:52.430807 dbus-daemon[1480]: [system] SELinux support is enabled Jun 20 18:52:52.435611 jq[1507]: true Jun 20 18:52:52.448723 update_engine[1494]: I20250620 18:52:52.448607 1494 update_check_scheduler.cc:74] Next update check in 5m14s Jun 20 18:52:52.449952 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
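coreos-metadata resolves instance data from the link-local Hetzner endpoint shown in its own log lines above. A stdlib-only sketch of the same two fetches (URLs copied from the log; the response format is whatever the metadata service returns):

from urllib.request import urlopen

BASE = "http://169.254.169.254/hetzner/v1/metadata"
for suffix in ("", "/private-networks"):
    url = BASE + suffix
    with urlopen(url, timeout=5) as resp:
        print(f"--- {url} ({resp.status})")
        print(resp.read().decode())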
Jun 20 18:52:52.449998 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 18:52:52.452490 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 18:52:52.452512 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 18:52:52.460283 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 18:52:52.461493 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 18:52:52.466601 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1331) Jun 20 18:52:52.469790 systemd[1]: Started update-engine.service - Update Engine. Jun 20 18:52:52.482111 tar[1504]: linux-amd64/LICENSE Jun 20 18:52:52.484595 tar[1504]: linux-amd64/helm Jun 20 18:52:52.483633 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 18:52:52.497151 systemd-logind[1491]: New seat seat0. Jun 20 18:52:52.498842 systemd-logind[1491]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 18:52:52.498855 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 18:52:52.500514 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 18:52:52.552489 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 20 18:52:52.558240 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 20 18:52:52.618704 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 18:52:52.635313 bash[1555]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:52:52.638609 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 18:52:52.646874 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Jun 20 18:52:52.649790 systemd[1]: Starting sshkeys.service... Jun 20 18:52:52.662521 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 20 18:52:52.673745 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 20 18:52:52.680119 extend-filesystems[1501]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jun 20 18:52:52.680119 extend-filesystems[1501]: old_desc_blocks = 1, new_desc_blocks = 5 Jun 20 18:52:52.680119 extend-filesystems[1501]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Jun 20 18:52:52.681974 extend-filesystems[1484]: Resized filesystem in /dev/sda9 Jun 20 18:52:52.681974 extend-filesystems[1484]: Found sr0 Jun 20 18:52:52.684359 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 18:52:52.684978 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
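extend-filesystems grows the root filesystem online, from 1617920 to 9393147 blocks of 4 KiB, i.e. from roughly 6.2 GiB to about 35.8 GiB, while /dev/sda9 stays mounted at /. A short sketch of that arithmetic plus the resize step it reports performing (resize2fs without a size argument grows the filesystem to fill the partition; the subprocess call is illustrative, not a suggestion to rerun it here):

import subprocess

BLOCK = 4096  # 4 KiB filesystem blocks, as logged by ext4 above
old_blocks, new_blocks = 1_617_920, 9_393_147

def gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"{gib(old_blocks):.1f} GiB -> {gib(new_blocks):.1f} GiB")  # ~6.2 -> ~35.8

# Online growth of the mounted ext4 filesystem, as extend-filesystems does.
subprocess.run(["resize2fs", "/dev/sda9"], check=True)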
Jun 20 18:52:52.729308 containerd[1508]: time="2025-06-20T18:52:52.727729791Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jun 20 18:52:52.737135 coreos-metadata[1560]: Jun 20 18:52:52.737 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jun 20 18:52:52.740119 coreos-metadata[1560]: Jun 20 18:52:52.739 INFO Fetch successful Jun 20 18:52:52.743647 unknown[1560]: wrote ssh authorized keys file for user: core Jun 20 18:52:52.753239 sshd_keygen[1523]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 18:52:52.772514 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 18:52:52.775207 containerd[1508]: time="2025-06-20T18:52:52.775173271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.775947 update-ssh-keys[1571]: Updated "/home/core/.ssh/authorized_keys" Jun 20 18:52:52.778585 containerd[1508]: time="2025-06-20T18:52:52.778554584Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.94-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778654301Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778676502Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778808850Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778827485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778880234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.778890814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779042398Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779054791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779066373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779073296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779137527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780596 containerd[1508]: time="2025-06-20T18:52:52.779294240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780800 containerd[1508]: time="2025-06-20T18:52:52.779399708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 20 18:52:52.780800 containerd[1508]: time="2025-06-20T18:52:52.779410979Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 20 18:52:52.780941 containerd[1508]: time="2025-06-20T18:52:52.780924789Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 20 18:52:52.781056 containerd[1508]: time="2025-06-20T18:52:52.781017653Z" level=info msg="metadata content store policy set" policy=shared Jun 20 18:52:52.783767 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 18:52:52.786962 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 20 18:52:52.789308 systemd[1]: Finished sshkeys.service. Jun 20 18:52:52.790079 containerd[1508]: time="2025-06-20T18:52:52.790055143Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 20 18:52:52.790172 containerd[1508]: time="2025-06-20T18:52:52.790160831Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 20 18:52:52.790497 containerd[1508]: time="2025-06-20T18:52:52.790255068Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 20 18:52:52.790497 containerd[1508]: time="2025-06-20T18:52:52.790274023Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 20 18:52:52.790497 containerd[1508]: time="2025-06-20T18:52:52.790285815Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 20 18:52:52.790628 containerd[1508]: time="2025-06-20T18:52:52.790584736Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 20 18:52:52.790710 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 18:52:52.790843 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791412568Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791517175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791551610Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791563512Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791573731Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791583599Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791617 containerd[1508]: time="2025-06-20T18:52:52.791592677Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791825 containerd[1508]: time="2025-06-20T18:52:52.791602124Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791825 containerd[1508]: time="2025-06-20T18:52:52.791764128Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791825 containerd[1508]: time="2025-06-20T18:52:52.791776571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791825 containerd[1508]: time="2025-06-20T18:52:52.791786460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.791825 containerd[1508]: time="2025-06-20T18:52:52.791798433Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792151495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792171723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792181481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792191670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792200677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792210264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792237275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792247625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792264486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792277281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792286758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792295615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792321363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792332384Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 20 18:52:52.792392 containerd[1508]: time="2025-06-20T18:52:52.792349045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.792729 containerd[1508]: time="2025-06-20T18:52:52.792359565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.793473 containerd[1508]: time="2025-06-20T18:52:52.792376727Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 20 18:52:52.793473 containerd[1508]: time="2025-06-20T18:52:52.792879390Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 20 18:52:52.793473 containerd[1508]: time="2025-06-20T18:52:52.793348139Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793357557Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793557782Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793570165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793581306Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793589401Z" level=info msg="NRI interface is disabled by configuration." Jun 20 18:52:52.793708 containerd[1508]: time="2025-06-20T18:52:52.793597477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 20 18:52:52.797970 containerd[1508]: time="2025-06-20T18:52:52.797546873Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 20 18:52:52.797970 containerd[1508]: time="2025-06-20T18:52:52.797837218Z" level=info msg="Connect containerd service" Jun 20 18:52:52.797970 containerd[1508]: time="2025-06-20T18:52:52.797909053Z" level=info msg="using legacy CRI server" Jun 20 18:52:52.797970 containerd[1508]: time="2025-06-20T18:52:52.797920805Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 20 18:52:52.798119 containerd[1508]: time="2025-06-20T18:52:52.798046651Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 20 18:52:52.799963 containerd[1508]: time="2025-06-20T18:52:52.799756308Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 18:52:52.800028 
containerd[1508]: time="2025-06-20T18:52:52.800004102Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800052773Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800081337Z" level=info msg="Start subscribing containerd event" Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800124719Z" level=info msg="Start recovering state" Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800179281Z" level=info msg="Start event monitor" Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800189160Z" level=info msg="Start snapshots syncer" Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800196053Z" level=info msg="Start cni network conf syncer for default" Jun 20 18:52:52.800241 containerd[1508]: time="2025-06-20T18:52:52.800202484Z" level=info msg="Start streaming server" Jun 20 18:52:52.800655 containerd[1508]: time="2025-06-20T18:52:52.800643391Z" level=info msg="containerd successfully booted in 0.077866s" Jun 20 18:52:52.802700 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 18:52:52.803904 systemd[1]: Started containerd.service - containerd container runtime. Jun 20 18:52:52.813642 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 18:52:52.821726 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 18:52:52.826670 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 18:52:52.827117 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 18:52:53.116514 tar[1504]: linux-amd64/README.md Jun 20 18:52:53.123413 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 20 18:52:53.430708 systemd-networkd[1403]: eth0: Gained IPv6LL Jun 20 18:52:53.431468 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Jun 20 18:52:53.433895 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 20 18:52:53.436071 systemd[1]: Reached target network-online.target - Network is Online. Jun 20 18:52:53.443799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:52:53.447277 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 20 18:52:53.468679 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 20 18:52:54.134640 systemd-networkd[1403]: eth1: Gained IPv6LL Jun 20 18:52:54.135237 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection. Jun 20 18:52:54.206143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:52:54.207024 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 20 18:52:54.208627 systemd[1]: Startup finished in 1.129s (kernel) + 6.699s (initrd) + 3.883s (userspace) = 11.712s. 
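With containerd serving on /run/containerd/containerd.sock (plus the matching .ttrpc socket logged above), later units such as the kubelet and Docker reach it over plain UNIX sockets. A small check that the CRI-facing socket exists and accepts connections, with no containerd client library assumed:

import os
import socket
import stat

SOCK = "/run/containerd/containerd.sock"
assert stat.S_ISSOCK(os.stat(SOCK).st_mode), f"{SOCK} is not a socket"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(2)
s.connect(SOCK)  # raises if the daemon is not listening
s.close()
print("containerd socket is up")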
Jun 20 18:52:54.209505 (kubelet)[1609]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:52:54.694161 kubelet[1609]: E0620 18:52:54.694074 1609 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:52:54.696316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:52:54.696487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:52:54.696766 systemd[1]: kubelet.service: Consumed 793ms CPU time, 268.7M memory peak. Jun 20 18:53:04.947111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 20 18:53:04.952787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:05.026197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:05.037658 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:05.074460 kubelet[1628]: E0620 18:53:05.072087 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:05.076710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:05.076834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:05.077373 systemd[1]: kubelet.service: Consumed 110ms CPU time, 108.8M memory peak. Jun 20 18:53:15.327561 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 20 18:53:15.332906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:15.422555 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:15.426704 (kubelet)[1643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:15.454660 kubelet[1643]: E0620 18:53:15.454606 1643 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:15.456635 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:15.456751 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:15.456979 systemd[1]: kubelet.service: Consumed 102ms CPU time, 110.2M memory peak. Jun 20 18:53:25.360756 systemd-timesyncd[1437]: Contacted time server 130.61.133.198:123 (2.flatcar.pool.ntp.org). Jun 20 18:53:25.360825 systemd-timesyncd[1437]: Initial clock synchronization to Fri 2025-06-20 18:53:25.360560 UTC. Jun 20 18:53:25.361368 systemd-resolved[1405]: Clock change detected. Flushing caches. Jun 20 18:53:26.567239 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
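The kubelet failure above is a bootstrap-ordering condition rather than a packaging problem: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so until the node is joined the service exits and systemd re-queues it, with the restart counter climbing roughly every ten seconds. Purely as an illustration of the file shape the kubelet is looking for, a hypothetical minimal KubeletConfiguration written from Python; the real file should come from kubeadm, and the cgroupDriver value here is an assumption:

from pathlib import Path

# Hypothetical minimal KubeletConfiguration; kubeadm generates the real one
# during init/join, so this only sketches the expected shape.
config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""
Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
Path("/var/lib/kubelet/config.yaml").write_text(config)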
Jun 20 18:53:26.580373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:26.659087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:26.661528 (kubelet)[1659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:26.689126 kubelet[1659]: E0620 18:53:26.689063 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:26.690476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:26.690605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:26.691083 systemd[1]: kubelet.service: Consumed 100ms CPU time, 108.8M memory peak. Jun 20 18:53:36.817054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jun 20 18:53:36.822437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:36.900330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:36.911434 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:36.941291 kubelet[1675]: E0620 18:53:36.941234 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:36.943568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:36.943686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:36.943919 systemd[1]: kubelet.service: Consumed 104ms CPU time, 108.1M memory peak. Jun 20 18:53:38.577004 update_engine[1494]: I20250620 18:53:38.576897 1494 update_attempter.cc:509] Updating boot flags... Jun 20 18:53:38.614243 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1692) Jun 20 18:53:38.656244 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1691) Jun 20 18:53:38.694240 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1691) Jun 20 18:53:47.067061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jun 20 18:53:47.072631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:47.189910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:53:47.193122 (kubelet)[1712]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:47.230451 kubelet[1712]: E0620 18:53:47.230389 1712 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:47.233368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:47.233504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:47.233766 systemd[1]: kubelet.service: Consumed 133ms CPU time, 110.3M memory peak. Jun 20 18:53:57.317532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jun 20 18:53:57.323548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:53:57.434806 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:53:57.443386 (kubelet)[1727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:53:57.473567 kubelet[1727]: E0620 18:53:57.473510 1727 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:53:57.475463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:53:57.475605 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:53:57.475870 systemd[1]: kubelet.service: Consumed 124ms CPU time, 110.7M memory peak. Jun 20 18:54:07.567568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jun 20 18:54:07.582516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:07.692594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:07.694976 (kubelet)[1742]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:07.722310 kubelet[1742]: E0620 18:54:07.722213 1742 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:07.724403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:07.724520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:07.724754 systemd[1]: kubelet.service: Consumed 128ms CPU time, 109.6M memory peak. Jun 20 18:54:17.817109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jun 20 18:54:17.822348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:17.927358 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:54:17.929756 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:17.960401 kubelet[1758]: E0620 18:54:17.960365 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:17.962372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:17.962561 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:17.963016 systemd[1]: kubelet.service: Consumed 119ms CPU time, 111.8M memory peak. Jun 20 18:54:28.067511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jun 20 18:54:28.074437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:28.158732 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:28.161072 (kubelet)[1774]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:28.189722 kubelet[1774]: E0620 18:54:28.189668 1774 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:28.192027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:28.192218 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:28.192468 systemd[1]: kubelet.service: Consumed 106ms CPU time, 112.3M memory peak. Jun 20 18:54:38.317187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jun 20 18:54:38.324398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:38.404004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:38.412746 (kubelet)[1790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:38.444097 kubelet[1790]: E0620 18:54:38.444013 1790 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:38.446038 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:38.446193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:38.446526 systemd[1]: kubelet.service: Consumed 110ms CPU time, 112M memory peak. Jun 20 18:54:43.269281 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 20 18:54:43.274545 systemd[1]: Started sshd@0-46.62.134.149:22-139.178.68.195:38198.service - OpenSSH per-connection server daemon (139.178.68.195:38198). 
Jun 20 18:54:44.282863 sshd[1799]: Accepted publickey for core from 139.178.68.195 port 38198 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:44.285886 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:44.296271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 20 18:54:44.302425 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 20 18:54:44.309455 systemd-logind[1491]: New session 1 of user core. Jun 20 18:54:44.315320 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 20 18:54:44.322619 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 20 18:54:44.326862 (systemd)[1803]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 20 18:54:44.330465 systemd-logind[1491]: New session c1 of user core. Jun 20 18:54:44.449278 systemd[1803]: Queued start job for default target default.target. Jun 20 18:54:44.460066 systemd[1803]: Created slice app.slice - User Application Slice. Jun 20 18:54:44.460093 systemd[1803]: Reached target paths.target - Paths. Jun 20 18:54:44.460229 systemd[1803]: Reached target timers.target - Timers. Jun 20 18:54:44.461455 systemd[1803]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 20 18:54:44.478108 systemd[1803]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 20 18:54:44.478449 systemd[1803]: Reached target sockets.target - Sockets. Jun 20 18:54:44.478547 systemd[1803]: Reached target basic.target - Basic System. Jun 20 18:54:44.478615 systemd[1803]: Reached target default.target - Main User Target. Jun 20 18:54:44.478656 systemd[1803]: Startup finished in 141ms. Jun 20 18:54:44.478659 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 20 18:54:44.488346 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 20 18:54:45.187117 systemd[1]: Started sshd@1-46.62.134.149:22-139.178.68.195:43844.service - OpenSSH per-connection server daemon (139.178.68.195:43844). Jun 20 18:54:46.160621 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 43844 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:46.162803 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:46.170739 systemd-logind[1491]: New session 2 of user core. Jun 20 18:54:46.180427 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 20 18:54:46.833326 sshd[1817]: Connection closed by 139.178.68.195 port 43844 Jun 20 18:54:46.833967 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:46.836614 systemd[1]: sshd@1-46.62.134.149:22-139.178.68.195:43844.service: Deactivated successfully. Jun 20 18:54:46.838436 systemd[1]: session-2.scope: Deactivated successfully. Jun 20 18:54:46.839323 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit. Jun 20 18:54:46.840456 systemd-logind[1491]: Removed session 2. Jun 20 18:54:47.005420 systemd[1]: Started sshd@2-46.62.134.149:22-139.178.68.195:43860.service - OpenSSH per-connection server daemon (139.178.68.195:43860). 
Jun 20 18:54:47.972309 sshd[1823]: Accepted publickey for core from 139.178.68.195 port 43860 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:47.973523 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:47.978290 systemd-logind[1491]: New session 3 of user core. Jun 20 18:54:47.987450 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 20 18:54:48.567047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jun 20 18:54:48.573584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:48.642319 sshd[1825]: Connection closed by 139.178.68.195 port 43860 Jun 20 18:54:48.642831 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:48.653814 systemd[1]: sshd@2-46.62.134.149:22-139.178.68.195:43860.service: Deactivated successfully. Jun 20 18:54:48.656331 systemd[1]: session-3.scope: Deactivated successfully. Jun 20 18:54:48.657294 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit. Jun 20 18:54:48.660099 systemd-logind[1491]: Removed session 3. Jun 20 18:54:48.669798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:54:48.672844 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:48.703665 kubelet[1838]: E0620 18:54:48.703570 1838 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:48.705019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:48.705160 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:48.705630 systemd[1]: kubelet.service: Consumed 111ms CPU time, 112.3M memory peak. Jun 20 18:54:48.811678 systemd[1]: Started sshd@3-46.62.134.149:22-139.178.68.195:43874.service - OpenSSH per-connection server daemon (139.178.68.195:43874). Jun 20 18:54:49.776667 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 43874 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:49.777848 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:49.782461 systemd-logind[1491]: New session 4 of user core. Jun 20 18:54:49.788539 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 20 18:54:50.448310 sshd[1849]: Connection closed by 139.178.68.195 port 43874 Jun 20 18:54:50.448881 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:50.451190 systemd[1]: sshd@3-46.62.134.149:22-139.178.68.195:43874.service: Deactivated successfully. Jun 20 18:54:50.453104 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit. Jun 20 18:54:50.453507 systemd[1]: session-4.scope: Deactivated successfully. Jun 20 18:54:50.454514 systemd-logind[1491]: Removed session 4. Jun 20 18:54:50.620439 systemd[1]: Started sshd@4-46.62.134.149:22-139.178.68.195:43890.service - OpenSSH per-connection server daemon (139.178.68.195:43890). 
Jun 20 18:54:51.587512 sshd[1855]: Accepted publickey for core from 139.178.68.195 port 43890 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:51.588684 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:51.592610 systemd-logind[1491]: New session 5 of user core. Jun 20 18:54:51.602360 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 20 18:54:52.112277 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 20 18:54:52.112531 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:54:52.133826 sudo[1858]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:52.291563 sshd[1857]: Connection closed by 139.178.68.195 port 43890 Jun 20 18:54:52.292279 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:52.295157 systemd[1]: sshd@4-46.62.134.149:22-139.178.68.195:43890.service: Deactivated successfully. Jun 20 18:54:52.296727 systemd[1]: session-5.scope: Deactivated successfully. Jun 20 18:54:52.297869 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit. Jun 20 18:54:52.298917 systemd-logind[1491]: Removed session 5. Jun 20 18:54:52.463471 systemd[1]: Started sshd@5-46.62.134.149:22-139.178.68.195:43898.service - OpenSSH per-connection server daemon (139.178.68.195:43898). Jun 20 18:54:53.434019 sshd[1864]: Accepted publickey for core from 139.178.68.195 port 43898 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:53.435335 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:53.439550 systemd-logind[1491]: New session 6 of user core. Jun 20 18:54:53.446325 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 20 18:54:53.952650 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 20 18:54:53.953001 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:54:53.956077 sudo[1868]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:53.963945 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jun 20 18:54:53.964445 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:54:53.988499 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 18:54:54.026239 augenrules[1890]: No rules Jun 20 18:54:54.028075 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 18:54:54.028343 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 18:54:54.030072 sudo[1867]: pam_unix(sudo:session): session closed for user root Jun 20 18:54:54.188478 sshd[1866]: Connection closed by 139.178.68.195 port 43898 Jun 20 18:54:54.189469 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Jun 20 18:54:54.194853 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit. Jun 20 18:54:54.195253 systemd[1]: sshd@5-46.62.134.149:22-139.178.68.195:43898.service: Deactivated successfully. Jun 20 18:54:54.198034 systemd[1]: session-6.scope: Deactivated successfully. Jun 20 18:54:54.199844 systemd-logind[1491]: Removed session 6. Jun 20 18:54:54.359511 systemd[1]: Started sshd@6-46.62.134.149:22-139.178.68.195:53236.service - OpenSSH per-connection server daemon (139.178.68.195:53236). 
Jun 20 18:54:55.336383 sshd[1899]: Accepted publickey for core from 139.178.68.195 port 53236 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:54:55.338313 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:54:55.346069 systemd-logind[1491]: New session 7 of user core. Jun 20 18:54:55.361438 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 20 18:54:55.852973 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 20 18:54:55.853251 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jun 20 18:54:56.107285 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 20 18:54:56.108165 (dockerd)[1920]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 20 18:54:56.332623 dockerd[1920]: time="2025-06-20T18:54:56.332334440Z" level=info msg="Starting up" Jun 20 18:54:56.412094 dockerd[1920]: time="2025-06-20T18:54:56.411661206Z" level=info msg="Loading containers: start." Jun 20 18:54:56.533237 kernel: Initializing XFRM netlink socket Jun 20 18:54:56.597836 systemd-networkd[1403]: docker0: Link UP Jun 20 18:54:56.625120 dockerd[1920]: time="2025-06-20T18:54:56.625074868Z" level=info msg="Loading containers: done." Jun 20 18:54:56.636800 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2393980228-merged.mount: Deactivated successfully. Jun 20 18:54:56.640013 dockerd[1920]: time="2025-06-20T18:54:56.639977969Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 20 18:54:56.640092 dockerd[1920]: time="2025-06-20T18:54:56.640052938Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jun 20 18:54:56.640151 dockerd[1920]: time="2025-06-20T18:54:56.640129882Z" level=info msg="Daemon has completed initialization" Jun 20 18:54:56.665887 dockerd[1920]: time="2025-06-20T18:54:56.665580897Z" level=info msg="API listen on /run/docker.sock" Jun 20 18:54:56.666196 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 20 18:54:57.378015 containerd[1508]: time="2025-06-20T18:54:57.377852600Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jun 20 18:54:57.948046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2783923770.mount: Deactivated successfully. Jun 20 18:54:58.817060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jun 20 18:54:58.826331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:54:58.906356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:54:58.909440 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 18:54:58.943926 kubelet[2170]: E0620 18:54:58.943881 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 18:54:58.945524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 18:54:58.945629 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 18:54:58.945872 systemd[1]: kubelet.service: Consumed 94ms CPU time, 108.2M memory peak. Jun 20 18:54:58.950579 containerd[1508]: time="2025-06-20T18:54:58.950538282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:58.951482 containerd[1508]: time="2025-06-20T18:54:58.951450528Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079193" Jun 20 18:54:58.952716 containerd[1508]: time="2025-06-20T18:54:58.952673446Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:58.955346 containerd[1508]: time="2025-06-20T18:54:58.955312563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:54:58.956188 containerd[1508]: time="2025-06-20T18:54:58.956029505Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 1.578138693s" Jun 20 18:54:58.956188 containerd[1508]: time="2025-06-20T18:54:58.956055954Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jun 20 18:54:58.956880 containerd[1508]: time="2025-06-20T18:54:58.956857233Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jun 20 18:55:00.065531 containerd[1508]: time="2025-06-20T18:55:00.065479001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:00.066688 containerd[1508]: time="2025-06-20T18:55:00.066616792Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018968" Jun 20 18:55:00.067295 containerd[1508]: time="2025-06-20T18:55:00.067257951Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:00.069433 containerd[1508]: time="2025-06-20T18:55:00.069391183Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:00.071003 containerd[1508]: time="2025-06-20T18:55:00.070972261Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.114092507s" Jun 20 18:55:00.071003 containerd[1508]: time="2025-06-20T18:55:00.070999482Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jun 20 18:55:00.071446 containerd[1508]: time="2025-06-20T18:55:00.071423115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jun 20 18:55:01.032471 containerd[1508]: time="2025-06-20T18:55:01.032401182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:01.033438 containerd[1508]: time="2025-06-20T18:55:01.033405962Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155077" Jun 20 18:55:01.034084 containerd[1508]: time="2025-06-20T18:55:01.034067460Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:01.036441 containerd[1508]: time="2025-06-20T18:55:01.036417258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:01.037301 containerd[1508]: time="2025-06-20T18:55:01.037283629Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 965.837821ms" Jun 20 18:55:01.037338 containerd[1508]: time="2025-06-20T18:55:01.037306541Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jun 20 18:55:01.038024 containerd[1508]: time="2025-06-20T18:55:01.038004439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jun 20 18:55:01.997984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29702407.mount: Deactivated successfully. 
Jun 20 18:55:02.267426 containerd[1508]: time="2025-06-20T18:55:02.267310599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:02.268034 containerd[1508]: time="2025-06-20T18:55:02.267963552Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892774" Jun 20 18:55:02.268622 containerd[1508]: time="2025-06-20T18:55:02.268591586Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:02.270281 containerd[1508]: time="2025-06-20T18:55:02.270252598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:02.270938 containerd[1508]: time="2025-06-20T18:55:02.270826001Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 1.232800112s" Jun 20 18:55:02.270938 containerd[1508]: time="2025-06-20T18:55:02.270858306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jun 20 18:55:02.271479 containerd[1508]: time="2025-06-20T18:55:02.271335987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jun 20 18:55:02.781989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1410342040.mount: Deactivated successfully. 
Jun 20 18:55:03.656009 containerd[1508]: time="2025-06-20T18:55:03.655963342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:03.656914 containerd[1508]: time="2025-06-20T18:55:03.656882608Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Jun 20 18:55:03.657753 containerd[1508]: time="2025-06-20T18:55:03.657704599Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:03.662645 containerd[1508]: time="2025-06-20T18:55:03.662348334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:03.663228 containerd[1508]: time="2025-06-20T18:55:03.663189374Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.391629696s" Jun 20 18:55:03.663295 containerd[1508]: time="2025-06-20T18:55:03.663282049Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jun 20 18:55:03.663852 containerd[1508]: time="2025-06-20T18:55:03.663738265Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 18:55:04.136733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2847523984.mount: Deactivated successfully. 
Jun 20 18:55:04.145853 containerd[1508]: time="2025-06-20T18:55:04.145737777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:04.147106 containerd[1508]: time="2025-06-20T18:55:04.147029105Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jun 20 18:55:04.148596 containerd[1508]: time="2025-06-20T18:55:04.148522408Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:04.152172 containerd[1508]: time="2025-06-20T18:55:04.152070470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:04.153505 containerd[1508]: time="2025-06-20T18:55:04.153333131Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 489.376017ms" Jun 20 18:55:04.153505 containerd[1508]: time="2025-06-20T18:55:04.153379534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 18:55:04.154538 containerd[1508]: time="2025-06-20T18:55:04.154485320Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jun 20 18:55:04.591865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3785720927.mount: Deactivated successfully. Jun 20 18:55:05.846881 containerd[1508]: time="2025-06-20T18:55:05.846832122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:05.847854 containerd[1508]: time="2025-06-20T18:55:05.847811623Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247215" Jun 20 18:55:05.848590 containerd[1508]: time="2025-06-20T18:55:05.848545170Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:05.850915 containerd[1508]: time="2025-06-20T18:55:05.850876866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:05.851940 containerd[1508]: time="2025-06-20T18:55:05.851825965Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.697297338s" Jun 20 18:55:05.851940 containerd[1508]: time="2025-06-20T18:55:05.851852167Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jun 20 18:55:08.600980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 18:55:08.601196 systemd[1]: kubelet.service: Consumed 94ms CPU time, 108.2M memory peak. Jun 20 18:55:08.607379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:08.631108 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-7.scope)... Jun 20 18:55:08.631122 systemd[1]: Reloading... Jun 20 18:55:08.707227 zram_generator::config[2373]: No configuration found. Jun 20 18:55:08.797625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:55:08.879487 systemd[1]: Reloading finished in 248 ms. Jun 20 18:55:08.921466 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:08.926015 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:55:08.926350 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:08.927294 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 18:55:08.927463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:08.927514 systemd[1]: kubelet.service: Consumed 66ms CPU time, 97.8M memory peak. Jun 20 18:55:08.928770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:09.013909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:09.022398 (kubelet)[2434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:55:09.049692 kubelet[2434]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:55:09.049692 kubelet[2434]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:55:09.049692 kubelet[2434]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 20 18:55:09.051665 kubelet[2434]: I0620 18:55:09.051437 2434 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:55:09.713224 kubelet[2434]: I0620 18:55:09.712669 2434 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:55:09.713224 kubelet[2434]: I0620 18:55:09.712703 2434 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:55:09.713224 kubelet[2434]: I0620 18:55:09.713010 2434 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:55:09.741873 kubelet[2434]: I0620 18:55:09.741796 2434 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:55:09.745576 kubelet[2434]: E0620 18:55:09.745534 2434 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.134.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jun 20 18:55:09.755123 kubelet[2434]: E0620 18:55:09.755101 2434 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:55:09.755123 kubelet[2434]: I0620 18:55:09.755120 2434 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:55:09.760114 kubelet[2434]: I0620 18:55:09.760099 2434 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:55:09.762538 kubelet[2434]: I0620 18:55:09.762497 2434 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:55:09.765165 kubelet[2434]: I0620 18:55:09.762526 2434 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-5-00d7cf22d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:55:09.765165 kubelet[2434]: I0620 18:55:09.765156 2434 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:55:09.765165 kubelet[2434]: I0620 18:55:09.765164 2434 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:55:09.765996 kubelet[2434]: I0620 18:55:09.765965 2434 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:55:09.767669 kubelet[2434]: I0620 18:55:09.767647 2434 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:55:09.767669 kubelet[2434]: I0620 18:55:09.767663 2434 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:55:09.767833 kubelet[2434]: I0620 18:55:09.767696 2434 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:55:09.769043 kubelet[2434]: I0620 18:55:09.768928 2434 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:55:09.777059 kubelet[2434]: E0620 18:55:09.776805 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.134.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-00d7cf22d6&limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:55:09.781765 kubelet[2434]: E0620 18:55:09.781731 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.134.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:55:09.781833 kubelet[2434]: I0620 18:55:09.781815 2434 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:55:09.782255 kubelet[2434]: I0620 18:55:09.782236 2434 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:55:09.782923 kubelet[2434]: W0620 18:55:09.782900 2434 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 20 18:55:09.788497 kubelet[2434]: I0620 18:55:09.788468 2434 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:55:09.788604 kubelet[2434]: I0620 18:55:09.788580 2434 server.go:1289] "Started kubelet" Jun 20 18:55:09.790677 kubelet[2434]: I0620 18:55:09.790647 2434 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:55:09.795519 kubelet[2434]: E0620 18:55:09.791909 2434 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.134.149:6443/api/v1/namespaces/default/events\": dial tcp 46.62.134.149:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-2-0-5-00d7cf22d6.184ad5194a8db7bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-2-0-5-00d7cf22d6,UID:ci-4230-2-0-5-00d7cf22d6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-2-0-5-00d7cf22d6,},FirstTimestamp:2025-06-20 18:55:09.788559293 +0000 UTC m=+0.763549164,LastTimestamp:2025-06-20 18:55:09.788559293 +0000 UTC m=+0.763549164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-2-0-5-00d7cf22d6,}" Jun 20 18:55:09.796655 kubelet[2434]: I0620 18:55:09.796132 2434 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:55:09.798171 kubelet[2434]: I0620 18:55:09.798061 2434 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:55:09.803734 kubelet[2434]: I0620 18:55:09.798281 2434 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:55:09.803971 kubelet[2434]: I0620 18:55:09.803930 2434 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:55:09.804177 kubelet[2434]: I0620 18:55:09.804166 2434 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:55:09.804328 kubelet[2434]: E0620 18:55:09.798413 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" Jun 20 18:55:09.804551 kubelet[2434]: I0620 18:55:09.804537 2434 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:55:09.805760 kubelet[2434]: I0620 18:55:09.798293 2434 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:55:09.805851 kubelet[2434]: I0620 18:55:09.805843 2434 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:55:09.806483 kubelet[2434]: E0620 18:55:09.806459 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://46.62.134.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-00d7cf22d6?timeout=10s\": dial tcp 46.62.134.149:6443: connect: connection refused" interval="200ms" Jun 20 18:55:09.806741 kubelet[2434]: I0620 18:55:09.806728 2434 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:55:09.806849 kubelet[2434]: I0620 18:55:09.806835 2434 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:55:09.809140 kubelet[2434]: I0620 18:55:09.809114 2434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:55:09.809430 kubelet[2434]: E0620 18:55:09.809406 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.134.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:55:09.810050 kubelet[2434]: I0620 18:55:09.810037 2434 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:55:09.810112 kubelet[2434]: I0620 18:55:09.810104 2434 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:55:09.810161 kubelet[2434]: I0620 18:55:09.810154 2434 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jun 20 18:55:09.810225 kubelet[2434]: I0620 18:55:09.810196 2434 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:55:09.810322 kubelet[2434]: E0620 18:55:09.810309 2434 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:55:09.811123 kubelet[2434]: I0620 18:55:09.811101 2434 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:55:09.815107 kubelet[2434]: E0620 18:55:09.815087 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.134.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:55:09.815281 kubelet[2434]: E0620 18:55:09.815211 2434 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:55:09.840480 kubelet[2434]: I0620 18:55:09.840452 2434 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:55:09.840480 kubelet[2434]: I0620 18:55:09.840467 2434 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:55:09.840594 kubelet[2434]: I0620 18:55:09.840491 2434 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:55:09.842518 kubelet[2434]: I0620 18:55:09.842498 2434 policy_none.go:49] "None policy: Start" Jun 20 18:55:09.842518 kubelet[2434]: I0620 18:55:09.842515 2434 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:55:09.842588 kubelet[2434]: I0620 18:55:09.842526 2434 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:55:09.850005 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jun 20 18:55:09.857216 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 18:55:09.860184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 20 18:55:09.871881 kubelet[2434]: E0620 18:55:09.871850 2434 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:55:09.872005 kubelet[2434]: I0620 18:55:09.871990 2434 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:55:09.872271 kubelet[2434]: I0620 18:55:09.872006 2434 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:55:09.872271 kubelet[2434]: I0620 18:55:09.872170 2434 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:55:09.872911 kubelet[2434]: E0620 18:55:09.872893 2434 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jun 20 18:55:09.872949 kubelet[2434]: E0620 18:55:09.872925 2434 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-2-0-5-00d7cf22d6\" not found" Jun 20 18:55:09.954286 systemd[1]: Created slice kubepods-burstable-pod6e1d317b92878c4cd4c95b4aa003c9d5.slice - libcontainer container kubepods-burstable-pod6e1d317b92878c4cd4c95b4aa003c9d5.slice. Jun 20 18:55:09.965081 kubelet[2434]: E0620 18:55:09.964769 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:09.972478 systemd[1]: Created slice kubepods-burstable-podfce90edc3027283ed06f70c908cf7996.slice - libcontainer container kubepods-burstable-podfce90edc3027283ed06f70c908cf7996.slice. Jun 20 18:55:09.975106 kubelet[2434]: I0620 18:55:09.974911 2434 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:09.977237 kubelet[2434]: E0620 18:55:09.976138 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.134.149:6443/api/v1/nodes\": dial tcp 46.62.134.149:6443: connect: connection refused" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:09.981570 kubelet[2434]: E0620 18:55:09.981537 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:09.984462 systemd[1]: Created slice kubepods-burstable-podb846a466010712fecae5dbe23d418a55.slice - libcontainer container kubepods-burstable-podb846a466010712fecae5dbe23d418a55.slice. 
Jun 20 18:55:09.986801 kubelet[2434]: E0620 18:55:09.986780 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007125 kubelet[2434]: I0620 18:55:10.007087 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007261 kubelet[2434]: I0620 18:55:10.007138 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007261 kubelet[2434]: I0620 18:55:10.007171 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007261 kubelet[2434]: I0620 18:55:10.007224 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b846a466010712fecae5dbe23d418a55-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-5-00d7cf22d6\" (UID: \"b846a466010712fecae5dbe23d418a55\") " pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007261 kubelet[2434]: I0620 18:55:10.007251 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007419 kubelet[2434]: I0620 18:55:10.007274 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007419 kubelet[2434]: I0620 18:55:10.007295 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007419 kubelet[2434]: I0620 18:55:10.007316 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007419 kubelet[2434]: I0620 18:55:10.007337 2434 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.007668 kubelet[2434]: E0620 18:55:10.007614 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.134.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-00d7cf22d6?timeout=10s\": dial tcp 46.62.134.149:6443: connect: connection refused" interval="400ms" Jun 20 18:55:10.179527 kubelet[2434]: I0620 18:55:10.179448 2434 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.180096 kubelet[2434]: E0620 18:55:10.179856 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.134.149:6443/api/v1/nodes\": dial tcp 46.62.134.149:6443: connect: connection refused" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.267952 containerd[1508]: time="2025-06-20T18:55:10.267751193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-5-00d7cf22d6,Uid:6e1d317b92878c4cd4c95b4aa003c9d5,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:10.284484 containerd[1508]: time="2025-06-20T18:55:10.284345258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-5-00d7cf22d6,Uid:fce90edc3027283ed06f70c908cf7996,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:10.288391 containerd[1508]: time="2025-06-20T18:55:10.288335708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-5-00d7cf22d6,Uid:b846a466010712fecae5dbe23d418a55,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:10.408868 kubelet[2434]: E0620 18:55:10.408773 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.134.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-00d7cf22d6?timeout=10s\": dial tcp 46.62.134.149:6443: connect: connection refused" interval="800ms" Jun 20 18:55:10.581898 kubelet[2434]: I0620 18:55:10.581835 2434 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.582380 kubelet[2434]: E0620 18:55:10.582311 2434 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.134.149:6443/api/v1/nodes\": dial tcp 46.62.134.149:6443: connect: connection refused" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:10.623132 kubelet[2434]: E0620 18:55:10.623053 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.134.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jun 20 18:55:10.749640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2624774825.mount: Deactivated successfully. 
Jun 20 18:55:10.757115 containerd[1508]: time="2025-06-20T18:55:10.757057509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:55:10.759290 containerd[1508]: time="2025-06-20T18:55:10.759239340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:55:10.761435 containerd[1508]: time="2025-06-20T18:55:10.761367053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Jun 20 18:55:10.762410 containerd[1508]: time="2025-06-20T18:55:10.762351086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:55:10.763640 containerd[1508]: time="2025-06-20T18:55:10.763580816Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:55:10.765328 containerd[1508]: time="2025-06-20T18:55:10.765081786Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 20 18:55:10.765328 containerd[1508]: time="2025-06-20T18:55:10.765229029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:55:10.767394 containerd[1508]: time="2025-06-20T18:55:10.767358846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.374831ms" Jun 20 18:55:10.769232 containerd[1508]: time="2025-06-20T18:55:10.768181137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 18:55:10.770566 containerd[1508]: time="2025-06-20T18:55:10.770542714Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 486.097569ms" Jun 20 18:55:10.771692 containerd[1508]: time="2025-06-20T18:55:10.771667056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 483.249076ms" Jun 20 18:55:10.869945 containerd[1508]: time="2025-06-20T18:55:10.869715319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:10.869945 containerd[1508]: time="2025-06-20T18:55:10.869759407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:10.869945 containerd[1508]: time="2025-06-20T18:55:10.869768545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.869945 containerd[1508]: time="2025-06-20T18:55:10.869835337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.871833 containerd[1508]: time="2025-06-20T18:55:10.871605832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:10.871833 containerd[1508]: time="2025-06-20T18:55:10.871649609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:10.871833 containerd[1508]: time="2025-06-20T18:55:10.871662714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.871833 containerd[1508]: time="2025-06-20T18:55:10.871725659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.872846 containerd[1508]: time="2025-06-20T18:55:10.872776365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:10.873182 containerd[1508]: time="2025-06-20T18:55:10.873115508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:10.873182 containerd[1508]: time="2025-06-20T18:55:10.873133714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.873534 containerd[1508]: time="2025-06-20T18:55:10.873467997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:10.894029 systemd[1]: Started cri-containerd-70e2c538b0e04cfa64899d098e13ed374e7e56c5a6a726d0d0b3ad2001ee3e03.scope - libcontainer container 70e2c538b0e04cfa64899d098e13ed374e7e56c5a6a726d0d0b3ad2001ee3e03. Jun 20 18:55:10.895195 kubelet[2434]: E0620 18:55:10.894921 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.134.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-2-0-5-00d7cf22d6&limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jun 20 18:55:10.899356 systemd[1]: Started cri-containerd-340a2bd09d34b7d1faf3430bf15cf6aaaefd9479c4cbb01f90c174686ce2900b.scope - libcontainer container 340a2bd09d34b7d1faf3430bf15cf6aaaefd9479c4cbb01f90c174686ce2900b. Jun 20 18:55:10.901653 systemd[1]: Started cri-containerd-c7d89e14b0640e778223d8ee92f0b9f7f5e80627a803c0ea80571ccb347618a6.scope - libcontainer container c7d89e14b0640e778223d8ee92f0b9f7f5e80627a803c0ea80571ccb347618a6. 
Jun 20 18:55:10.950840 containerd[1508]: time="2025-06-20T18:55:10.950685436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-2-0-5-00d7cf22d6,Uid:fce90edc3027283ed06f70c908cf7996,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7d89e14b0640e778223d8ee92f0b9f7f5e80627a803c0ea80571ccb347618a6\"" Jun 20 18:55:10.955884 containerd[1508]: time="2025-06-20T18:55:10.955672023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-2-0-5-00d7cf22d6,Uid:6e1d317b92878c4cd4c95b4aa003c9d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"70e2c538b0e04cfa64899d098e13ed374e7e56c5a6a726d0d0b3ad2001ee3e03\"" Jun 20 18:55:10.959437 containerd[1508]: time="2025-06-20T18:55:10.959411795Z" level=info msg="CreateContainer within sandbox \"c7d89e14b0640e778223d8ee92f0b9f7f5e80627a803c0ea80571ccb347618a6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 18:55:10.973264 containerd[1508]: time="2025-06-20T18:55:10.973229480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-2-0-5-00d7cf22d6,Uid:b846a466010712fecae5dbe23d418a55,Namespace:kube-system,Attempt:0,} returns sandbox id \"340a2bd09d34b7d1faf3430bf15cf6aaaefd9479c4cbb01f90c174686ce2900b\"" Jun 20 18:55:10.975755 containerd[1508]: time="2025-06-20T18:55:10.975673922Z" level=info msg="CreateContainer within sandbox \"70e2c538b0e04cfa64899d098e13ed374e7e56c5a6a726d0d0b3ad2001ee3e03\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 18:55:10.976642 containerd[1508]: time="2025-06-20T18:55:10.976391647Z" level=info msg="CreateContainer within sandbox \"c7d89e14b0640e778223d8ee92f0b9f7f5e80627a803c0ea80571ccb347618a6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"40aecf6decdf0ba15646119ea8b95fddb6492aed4d4490cbdb67f48e0e66c444\"" Jun 20 18:55:10.977007 containerd[1508]: time="2025-06-20T18:55:10.976991206Z" level=info msg="StartContainer for \"40aecf6decdf0ba15646119ea8b95fddb6492aed4d4490cbdb67f48e0e66c444\"" Jun 20 18:55:10.977494 containerd[1508]: time="2025-06-20T18:55:10.977392794Z" level=info msg="CreateContainer within sandbox \"340a2bd09d34b7d1faf3430bf15cf6aaaefd9479c4cbb01f90c174686ce2900b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 18:55:10.989859 containerd[1508]: time="2025-06-20T18:55:10.989564241Z" level=info msg="CreateContainer within sandbox \"70e2c538b0e04cfa64899d098e13ed374e7e56c5a6a726d0d0b3ad2001ee3e03\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"920236a3307bb5a8d0448d89068431802ed3708c9fbb32a2117d1941a314732a\"" Jun 20 18:55:10.991183 containerd[1508]: time="2025-06-20T18:55:10.990435691Z" level=info msg="StartContainer for \"920236a3307bb5a8d0448d89068431802ed3708c9fbb32a2117d1941a314732a\"" Jun 20 18:55:10.996747 containerd[1508]: time="2025-06-20T18:55:10.996724001Z" level=info msg="CreateContainer within sandbox \"340a2bd09d34b7d1faf3430bf15cf6aaaefd9479c4cbb01f90c174686ce2900b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e0240c9a685c29185f96f9bcc73e52613f9f9ddf4b0311822a7569aaec3234e\"" Jun 20 18:55:10.997148 containerd[1508]: time="2025-06-20T18:55:10.997126560Z" level=info msg="StartContainer for \"9e0240c9a685c29185f96f9bcc73e52613f9f9ddf4b0311822a7569aaec3234e\"" Jun 20 18:55:10.997960 systemd[1]: Started cri-containerd-40aecf6decdf0ba15646119ea8b95fddb6492aed4d4490cbdb67f48e0e66c444.scope - libcontainer container 
40aecf6decdf0ba15646119ea8b95fddb6492aed4d4490cbdb67f48e0e66c444. Jun 20 18:55:11.021330 systemd[1]: Started cri-containerd-920236a3307bb5a8d0448d89068431802ed3708c9fbb32a2117d1941a314732a.scope - libcontainer container 920236a3307bb5a8d0448d89068431802ed3708c9fbb32a2117d1941a314732a. Jun 20 18:55:11.032807 systemd[1]: Started cri-containerd-9e0240c9a685c29185f96f9bcc73e52613f9f9ddf4b0311822a7569aaec3234e.scope - libcontainer container 9e0240c9a685c29185f96f9bcc73e52613f9f9ddf4b0311822a7569aaec3234e. Jun 20 18:55:11.053406 containerd[1508]: time="2025-06-20T18:55:11.053370435Z" level=info msg="StartContainer for \"40aecf6decdf0ba15646119ea8b95fddb6492aed4d4490cbdb67f48e0e66c444\" returns successfully" Jun 20 18:55:11.071952 containerd[1508]: time="2025-06-20T18:55:11.071917422Z" level=info msg="StartContainer for \"920236a3307bb5a8d0448d89068431802ed3708c9fbb32a2117d1941a314732a\" returns successfully" Jun 20 18:55:11.090475 containerd[1508]: time="2025-06-20T18:55:11.090420130Z" level=info msg="StartContainer for \"9e0240c9a685c29185f96f9bcc73e52613f9f9ddf4b0311822a7569aaec3234e\" returns successfully" Jun 20 18:55:11.209464 kubelet[2434]: E0620 18:55:11.209361 2434 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.134.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-2-0-5-00d7cf22d6?timeout=10s\": dial tcp 46.62.134.149:6443: connect: connection refused" interval="1.6s" Jun 20 18:55:11.332683 kubelet[2434]: E0620 18:55:11.332106 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.134.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jun 20 18:55:11.348825 kubelet[2434]: E0620 18:55:11.348787 2434 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.134.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.134.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jun 20 18:55:11.385182 kubelet[2434]: I0620 18:55:11.384911 2434 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:11.846424 kubelet[2434]: E0620 18:55:11.846368 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:11.848301 kubelet[2434]: E0620 18:55:11.848117 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:11.849839 kubelet[2434]: E0620 18:55:11.849671 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:12.851963 kubelet[2434]: E0620 18:55:12.851938 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:12.852594 kubelet[2434]: E0620 18:55:12.852525 2434 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:12.872283 kubelet[2434]: E0620 18:55:12.872249 2434 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-2-0-5-00d7cf22d6\" not found" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.028520 kubelet[2434]: I0620 18:55:13.028468 2434 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.028520 kubelet[2434]: E0620 18:55:13.028514 2434 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4230-2-0-5-00d7cf22d6\": node \"ci-4230-2-0-5-00d7cf22d6\" not found" Jun 20 18:55:13.041448 kubelet[2434]: E0620 18:55:13.041400 2434 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4230-2-0-5-00d7cf22d6\" not found" Jun 20 18:55:13.099261 kubelet[2434]: I0620 18:55:13.099219 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.104682 kubelet[2434]: E0620 18:55:13.104377 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.104682 kubelet[2434]: I0620 18:55:13.104411 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.105722 kubelet[2434]: E0620 18:55:13.105684 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.105722 kubelet[2434]: I0620 18:55:13.105715 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.106690 kubelet[2434]: E0620 18:55:13.106662 2434 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-5-00d7cf22d6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:13.779968 kubelet[2434]: I0620 18:55:13.779910 2434 apiserver.go:52] "Watching apiserver" Jun 20 18:55:13.806631 kubelet[2434]: I0620 18:55:13.806537 2434 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:55:14.481928 kubelet[2434]: I0620 18:55:14.481350 2434 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:14.974572 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-7.scope)... Jun 20 18:55:14.974600 systemd[1]: Reloading... Jun 20 18:55:15.078249 zram_generator::config[2761]: No configuration found. Jun 20 18:55:15.164452 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 18:55:15.260271 systemd[1]: Reloading finished in 285 ms. Jun 20 18:55:15.282192 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:15.297732 systemd[1]: kubelet.service: Deactivated successfully. 
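Editor's note: the "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient. The system-node-critical and system-cluster-critical PriorityClasses are built-in objects that the API server materialises during its own bootstrap, so the kubelet's static-pod mirrors are admitted as soon as that finishes. A minimal client-go sketch that simply polls for the class is shown below; the kubeconfig path is an assumption, not taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: admin kubeconfig location on a kubeadm-style node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// Once this Get succeeds, the mirror-pod errors seen above stop.
		pc, err := cs.SchedulingV1().PriorityClasses().Get(context.TODO(), "system-node-critical", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found %s (value %d); static-pod mirrors can now be admitted\n", pc.Name, pc.Value)
			return
		}
		fmt.Println("not yet:", err)
		time.Sleep(2 * time.Second)
	}
}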
Jun 20 18:55:15.297909 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:15.297960 systemd[1]: kubelet.service: Consumed 1.111s CPU time, 128M memory peak. Jun 20 18:55:15.303408 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 18:55:15.406240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 18:55:15.410847 (kubelet)[2811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 18:55:15.447316 kubelet[2811]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:55:15.447316 kubelet[2811]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jun 20 18:55:15.447316 kubelet[2811]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 18:55:15.448126 kubelet[2811]: I0620 18:55:15.448076 2811 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 18:55:15.453136 kubelet[2811]: I0620 18:55:15.453108 2811 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jun 20 18:55:15.453136 kubelet[2811]: I0620 18:55:15.453129 2811 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 18:55:15.453353 kubelet[2811]: I0620 18:55:15.453331 2811 server.go:956] "Client rotation is on, will bootstrap in background" Jun 20 18:55:15.455149 kubelet[2811]: I0620 18:55:15.455126 2811 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jun 20 18:55:15.460480 kubelet[2811]: I0620 18:55:15.460456 2811 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 18:55:15.467014 kubelet[2811]: E0620 18:55:15.466971 2811 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jun 20 18:55:15.467014 kubelet[2811]: I0620 18:55:15.467002 2811 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jun 20 18:55:15.471127 kubelet[2811]: I0620 18:55:15.471100 2811 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 20 18:55:15.471343 kubelet[2811]: I0620 18:55:15.471315 2811 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 18:55:15.471477 kubelet[2811]: I0620 18:55:15.471337 2811 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-2-0-5-00d7cf22d6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 18:55:15.471477 kubelet[2811]: I0620 18:55:15.471475 2811 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 18:55:15.471574 kubelet[2811]: I0620 18:55:15.471483 2811 container_manager_linux.go:303] "Creating device plugin manager" Jun 20 18:55:15.471574 kubelet[2811]: I0620 18:55:15.471515 2811 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:55:15.471706 kubelet[2811]: I0620 18:55:15.471634 2811 kubelet.go:480] "Attempting to sync node with API server" Jun 20 18:55:15.471706 kubelet[2811]: I0620 18:55:15.471650 2811 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 18:55:15.473572 kubelet[2811]: I0620 18:55:15.473546 2811 kubelet.go:386] "Adding apiserver pod source" Jun 20 18:55:15.473618 kubelet[2811]: I0620 18:55:15.473575 2811 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 18:55:15.483508 kubelet[2811]: I0620 18:55:15.483455 2811 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jun 20 18:55:15.483870 kubelet[2811]: I0620 18:55:15.483844 2811 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jun 20 18:55:15.488073 kubelet[2811]: I0620 18:55:15.488045 2811 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jun 20 18:55:15.488118 kubelet[2811]: I0620 18:55:15.488097 2811 server.go:1289] "Started kubelet" Jun 20 18:55:15.488611 kubelet[2811]: I0620 18:55:15.488182 2811 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 18:55:15.488611 kubelet[2811]: 
I0620 18:55:15.488396 2811 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 18:55:15.489661 kubelet[2811]: I0620 18:55:15.488602 2811 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 18:55:15.490431 kubelet[2811]: I0620 18:55:15.490414 2811 server.go:317] "Adding debug handlers to kubelet server" Jun 20 18:55:15.494773 kubelet[2811]: I0620 18:55:15.494755 2811 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 18:55:15.496239 kubelet[2811]: I0620 18:55:15.496138 2811 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 18:55:15.498609 kubelet[2811]: I0620 18:55:15.498584 2811 volume_manager.go:297] "Starting Kubelet Volume Manager" Jun 20 18:55:15.498737 kubelet[2811]: I0620 18:55:15.498716 2811 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jun 20 18:55:15.498815 kubelet[2811]: I0620 18:55:15.498796 2811 reconciler.go:26] "Reconciler: start to sync state" Jun 20 18:55:15.501894 kubelet[2811]: E0620 18:55:15.501786 2811 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 18:55:15.502724 kubelet[2811]: I0620 18:55:15.502692 2811 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 18:55:15.506227 kubelet[2811]: I0620 18:55:15.506123 2811 factory.go:223] Registration of the containerd container factory successfully Jun 20 18:55:15.506227 kubelet[2811]: I0620 18:55:15.506138 2811 factory.go:223] Registration of the systemd container factory successfully Jun 20 18:55:15.508797 kubelet[2811]: I0620 18:55:15.508767 2811 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jun 20 18:55:15.509602 kubelet[2811]: I0620 18:55:15.509570 2811 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jun 20 18:55:15.509602 kubelet[2811]: I0620 18:55:15.509591 2811 status_manager.go:230] "Starting to sync pod status with apiserver" Jun 20 18:55:15.509602 kubelet[2811]: I0620 18:55:15.509606 2811 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
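Editor's note: the restarted kubelet above begins serving the podresources API on unix:/var/lib/kubelet/pod-resources/kubelet.sock (rate-limited to 100 qps with 10 burst tokens). A minimal client sketch for that socket, assuming the published k8s.io/kubelet podresources v1 gRPC API and root access to the socket:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path matches the "Starting to serve the podresources API" line above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(context.TODO(), &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, p := range resp.GetPodResources() {
		fmt.Printf("%s/%s: %d container(s)\n", p.GetNamespace(), p.GetName(), len(p.GetContainers()))
	}
}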
Jun 20 18:55:15.509690 kubelet[2811]: I0620 18:55:15.509612 2811 kubelet.go:2436] "Starting kubelet main sync loop" Jun 20 18:55:15.509690 kubelet[2811]: E0620 18:55:15.509642 2811 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 18:55:15.550641 kubelet[2811]: I0620 18:55:15.550549 2811 cpu_manager.go:221] "Starting CPU manager" policy="none" Jun 20 18:55:15.550641 kubelet[2811]: I0620 18:55:15.550565 2811 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jun 20 18:55:15.550641 kubelet[2811]: I0620 18:55:15.550580 2811 state_mem.go:36] "Initialized new in-memory state store" Jun 20 18:55:15.550773 kubelet[2811]: I0620 18:55:15.550700 2811 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 18:55:15.550773 kubelet[2811]: I0620 18:55:15.550709 2811 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 18:55:15.550773 kubelet[2811]: I0620 18:55:15.550724 2811 policy_none.go:49] "None policy: Start" Jun 20 18:55:15.550773 kubelet[2811]: I0620 18:55:15.550733 2811 memory_manager.go:186] "Starting memorymanager" policy="None" Jun 20 18:55:15.550773 kubelet[2811]: I0620 18:55:15.550740 2811 state_mem.go:35] "Initializing new in-memory state store" Jun 20 18:55:15.550858 kubelet[2811]: I0620 18:55:15.550812 2811 state_mem.go:75] "Updated machine memory state" Jun 20 18:55:15.554499 kubelet[2811]: E0620 18:55:15.554481 2811 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jun 20 18:55:15.554888 kubelet[2811]: I0620 18:55:15.554669 2811 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 18:55:15.554888 kubelet[2811]: I0620 18:55:15.554681 2811 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 18:55:15.555152 kubelet[2811]: I0620 18:55:15.555130 2811 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 18:55:15.559091 kubelet[2811]: E0620 18:55:15.558477 2811 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jun 20 18:55:15.610503 kubelet[2811]: I0620 18:55:15.610456 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.610787 kubelet[2811]: I0620 18:55:15.610760 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.610919 kubelet[2811]: I0620 18:55:15.610588 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.617329 kubelet[2811]: E0620 18:55:15.617274 2811 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-5-00d7cf22d6\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.664746 kubelet[2811]: I0620 18:55:15.664695 2811 kubelet_node_status.go:75] "Attempting to register node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.671675 kubelet[2811]: I0620 18:55:15.671600 2811 kubelet_node_status.go:124] "Node was previously registered" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.671865 kubelet[2811]: I0620 18:55:15.671832 2811 kubelet_node_status.go:78] "Successfully registered node" node="ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.799985 kubelet[2811]: I0620 18:55:15.799924 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b846a466010712fecae5dbe23d418a55-kubeconfig\") pod \"kube-scheduler-ci-4230-2-0-5-00d7cf22d6\" (UID: \"b846a466010712fecae5dbe23d418a55\") " pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.799985 kubelet[2811]: I0620 18:55:15.799984 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-k8s-certs\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800150 kubelet[2811]: I0620 18:55:15.800013 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800150 kubelet[2811]: I0620 18:55:15.800045 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-kubeconfig\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800150 kubelet[2811]: I0620 18:55:15.800070 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800150 kubelet[2811]: I0620 18:55:15.800094 2811 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e1d317b92878c4cd4c95b4aa003c9d5-ca-certs\") pod \"kube-apiserver-ci-4230-2-0-5-00d7cf22d6\" (UID: \"6e1d317b92878c4cd4c95b4aa003c9d5\") " pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800150 kubelet[2811]: I0620 18:55:15.800116 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-ca-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800287 kubelet[2811]: I0620 18:55:15.800151 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.800287 kubelet[2811]: I0620 18:55:15.800174 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fce90edc3027283ed06f70c908cf7996-k8s-certs\") pod \"kube-controller-manager-ci-4230-2-0-5-00d7cf22d6\" (UID: \"fce90edc3027283ed06f70c908cf7996\") " pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:15.989410 sudo[2847]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 18:55:15.990036 sudo[2847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 18:55:16.476772 kubelet[2811]: I0620 18:55:16.476648 2811 apiserver.go:52] "Watching apiserver" Jun 20 18:55:16.499293 kubelet[2811]: I0620 18:55:16.499254 2811 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jun 20 18:55:16.500811 sudo[2847]: pam_unix(sudo:session): session closed for user root Jun 20 18:55:16.540049 kubelet[2811]: I0620 18:55:16.539920 2811 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:16.548584 kubelet[2811]: E0620 18:55:16.548416 2811 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-2-0-5-00d7cf22d6\" already exists" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" Jun 20 18:55:16.571560 kubelet[2811]: I0620 18:55:16.571100 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-2-0-5-00d7cf22d6" podStartSLOduration=1.571082782 podStartE2EDuration="1.571082782s" podCreationTimestamp="2025-06-20 18:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:16.571082812 +0000 UTC m=+1.155607372" watchObservedRunningTime="2025-06-20 18:55:16.571082782 +0000 UTC m=+1.155607342" Jun 20 18:55:16.571560 kubelet[2811]: I0620 18:55:16.571195 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-2-0-5-00d7cf22d6" podStartSLOduration=2.571191887 podStartE2EDuration="2.571191887s" podCreationTimestamp="2025-06-20 18:55:14 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:16.560770684 +0000 UTC m=+1.145295243" watchObservedRunningTime="2025-06-20 18:55:16.571191887 +0000 UTC m=+1.155716447" Jun 20 18:55:16.589756 kubelet[2811]: I0620 18:55:16.589608 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-2-0-5-00d7cf22d6" podStartSLOduration=1.5895947719999999 podStartE2EDuration="1.589594772s" podCreationTimestamp="2025-06-20 18:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:16.580813326 +0000 UTC m=+1.165337886" watchObservedRunningTime="2025-06-20 18:55:16.589594772 +0000 UTC m=+1.174119332" Jun 20 18:55:17.819910 sudo[1902]: pam_unix(sudo:session): session closed for user root Jun 20 18:55:17.977279 sshd[1901]: Connection closed by 139.178.68.195 port 53236 Jun 20 18:55:17.978553 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Jun 20 18:55:17.980964 systemd[1]: sshd@6-46.62.134.149:22-139.178.68.195:53236.service: Deactivated successfully. Jun 20 18:55:17.982640 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 18:55:17.982808 systemd[1]: session-7.scope: Consumed 4.420s CPU time, 210M memory peak. Jun 20 18:55:17.984576 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Jun 20 18:55:17.985630 systemd-logind[1491]: Removed session 7. Jun 20 18:55:20.940535 kubelet[2811]: I0620 18:55:20.940482 2811 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 18:55:20.941063 kubelet[2811]: I0620 18:55:20.941002 2811 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 18:55:20.941115 containerd[1508]: time="2025-06-20T18:55:20.940855794Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 18:55:21.759397 systemd[1]: Created slice kubepods-besteffort-pode2a1b105_d1e1_49c1_9969_66857ba871c7.slice - libcontainer container kubepods-besteffort-pode2a1b105_d1e1_49c1_9969_66857ba871c7.slice. Jun 20 18:55:21.772762 systemd[1]: Created slice kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice - libcontainer container kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice. 
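Editor's note: the "Updating runtime config through cri with podcidr" and "Updating Pod CIDR" lines above are the kubelet pushing the node's pod CIDR (192.168.0.0/24) down to containerd over CRI; containerd then waits for a CNI provider (Cilium in this cluster) to drop its config, hence "No cni config template is specified, wait for other system components to drop the config." A minimal sketch of that single CRI call, assuming containerd's default socket path:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd listens on the default CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// Same payload the kubelet sends once it learns the node's pod CIDR.
	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod CIDR pushed to the runtime")
}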
Jun 20 18:55:21.839354 kubelet[2811]: I0620 18:55:21.839313 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2a1b105-d1e1-49c1-9969-66857ba871c7-lib-modules\") pod \"kube-proxy-28klh\" (UID: \"e2a1b105-d1e1-49c1-9969-66857ba871c7\") " pod="kube-system/kube-proxy-28klh" Jun 20 18:55:21.839354 kubelet[2811]: I0620 18:55:21.839352 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-bpf-maps\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839373 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-clustermesh-secrets\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839389 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hubble-tls\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839404 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2a1b105-d1e1-49c1-9969-66857ba871c7-kube-proxy\") pod \"kube-proxy-28klh\" (UID: \"e2a1b105-d1e1-49c1-9969-66857ba871c7\") " pod="kube-system/kube-proxy-28klh" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839461 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hostproc\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839498 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cni-path\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.839749 kubelet[2811]: I0620 18:55:21.839514 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-etc-cni-netd\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840080 kubelet[2811]: I0620 18:55:21.839531 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-kernel\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840080 kubelet[2811]: I0620 18:55:21.839560 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mdpx\" (UniqueName: 
\"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-kube-api-access-5mdpx\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840080 kubelet[2811]: I0620 18:55:21.839595 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2a1b105-d1e1-49c1-9969-66857ba871c7-xtables-lock\") pod \"kube-proxy-28klh\" (UID: \"e2a1b105-d1e1-49c1-9969-66857ba871c7\") " pod="kube-system/kube-proxy-28klh" Jun 20 18:55:21.840080 kubelet[2811]: I0620 18:55:21.839615 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwqq\" (UniqueName: \"kubernetes.io/projected/e2a1b105-d1e1-49c1-9969-66857ba871c7-kube-api-access-vqwqq\") pod \"kube-proxy-28klh\" (UID: \"e2a1b105-d1e1-49c1-9969-66857ba871c7\") " pod="kube-system/kube-proxy-28klh" Jun 20 18:55:21.840080 kubelet[2811]: I0620 18:55:21.839640 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-cgroup\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840226 kubelet[2811]: I0620 18:55:21.839664 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-lib-modules\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840226 kubelet[2811]: I0620 18:55:21.839678 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-config-path\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840226 kubelet[2811]: I0620 18:55:21.839693 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-net\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840226 kubelet[2811]: I0620 18:55:21.839735 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-run\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:21.840226 kubelet[2811]: I0620 18:55:21.839814 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-xtables-lock\") pod \"cilium-f89f5\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " pod="kube-system/cilium-f89f5" Jun 20 18:55:22.050745 systemd[1]: Created slice kubepods-besteffort-pod4188709a_8197_4414_a06c_bc7e73411cf5.slice - libcontainer container kubepods-besteffort-pod4188709a_8197_4414_a06c_bc7e73411cf5.slice. 
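Editor's note: the VerifyControllerAttachedVolume entries above enumerate the volume sources the kubelet must set up before starting kube-proxy-28klh and cilium-f89f5: hostPath mounts (bpf-maps, cni-path, lib-modules, ...), configMaps (kube-proxy, cilium-config-path), a secret (clustermesh-secrets) and projected volumes (hubble-tls, the kube-api-access-* tokens). As a rough illustration of how such sources appear in a pod spec, here is a short sketch using the core/v1 Go types; the volume names come from the log, but the host paths and object names are assumptions, not read from the actual Cilium manifest.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathDir := corev1.HostPathDirectoryOrCreate
	volumes := []corev1.Volume{
		// hostPath volumes; /sys/fs/bpf and /opt/cni/bin are assumed, typical Cilium paths.
		{Name: "bpf-maps", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &hostPathDir}}},
		{Name: "cni-path", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin", Type: &hostPathDir}}},
		// configMap-backed volume ("cilium-config" is an assumed object name).
		{Name: "cilium-config-path", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cilium-config"}}}},
		// secret-backed volume ("cilium-clustermesh" is an assumed object name).
		{Name: "clustermesh-secrets", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "cilium-clustermesh"}}},
	}
	for _, v := range volumes {
		fmt.Println("volume:", v.Name)
	}
}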
Jun 20 18:55:22.071034 containerd[1508]: time="2025-06-20T18:55:22.070970776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28klh,Uid:e2a1b105-d1e1-49c1-9969-66857ba871c7,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:22.075810 containerd[1508]: time="2025-06-20T18:55:22.075786451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f89f5,Uid:4d2b1ee2-2507-4d23-8baa-50d119ad9da7,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:22.107116 containerd[1508]: time="2025-06-20T18:55:22.107025211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:22.107116 containerd[1508]: time="2025-06-20T18:55:22.107034730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:22.107640 containerd[1508]: time="2025-06-20T18:55:22.107119455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:22.107640 containerd[1508]: time="2025-06-20T18:55:22.107154143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.107640 containerd[1508]: time="2025-06-20T18:55:22.107311241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.107716 containerd[1508]: time="2025-06-20T18:55:22.107640794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:22.107716 containerd[1508]: time="2025-06-20T18:55:22.107694209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.107812 containerd[1508]: time="2025-06-20T18:55:22.107780307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.121325 systemd[1]: Started cri-containerd-5058e304fce1a519cd9d6209f673a5ddbe2a71a14cdf26f1d5805c313a746d5c.scope - libcontainer container 5058e304fce1a519cd9d6209f673a5ddbe2a71a14cdf26f1d5805c313a746d5c. Jun 20 18:55:22.125669 systemd[1]: Started cri-containerd-3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e.scope - libcontainer container 3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e. 
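Editor's note: the pattern above (RunPodSandbox returning a sandbox ID, CreateContainer inside that sandbox, then a transient cri-containerd-<id>.scope unit) is the standard CRI flow the kubelet drives for every pod, static or scheduled. A minimal sketch of the same three calls against containerd's CRI endpoint; the metadata values mirror the kube-proxy-28klh lines above, the image reference is an assumption, and the whole program is an illustration rather than the kubelet's actual code path.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.TODO()

	// 1. RunPodSandbox — metadata copied from the kube-proxy-28klh entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-28klh",
			Uid:       "e2a1b105-d1e1-49c1-9969-66857ba871c7",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within that sandbox (image name is an assumption).
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer — this is the step that produces the cri-containerd-<id>.scope unit.
	if _, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox", sb.PodSandboxId, "container", cc.ContainerId)
}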
Jun 20 18:55:22.141568 kubelet[2811]: I0620 18:55:22.141433 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rglgc\" (UniqueName: \"kubernetes.io/projected/4188709a-8197-4414-a06c-bc7e73411cf5-kube-api-access-rglgc\") pod \"cilium-operator-6c4d7847fc-bcj7h\" (UID: \"4188709a-8197-4414-a06c-bc7e73411cf5\") " pod="kube-system/cilium-operator-6c4d7847fc-bcj7h" Jun 20 18:55:22.141568 kubelet[2811]: I0620 18:55:22.141493 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4188709a-8197-4414-a06c-bc7e73411cf5-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-bcj7h\" (UID: \"4188709a-8197-4414-a06c-bc7e73411cf5\") " pod="kube-system/cilium-operator-6c4d7847fc-bcj7h" Jun 20 18:55:22.146979 containerd[1508]: time="2025-06-20T18:55:22.146497127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-28klh,Uid:e2a1b105-d1e1-49c1-9969-66857ba871c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"5058e304fce1a519cd9d6209f673a5ddbe2a71a14cdf26f1d5805c313a746d5c\"" Jun 20 18:55:22.151564 containerd[1508]: time="2025-06-20T18:55:22.151458296Z" level=info msg="CreateContainer within sandbox \"5058e304fce1a519cd9d6209f673a5ddbe2a71a14cdf26f1d5805c313a746d5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 18:55:22.153729 containerd[1508]: time="2025-06-20T18:55:22.153706191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f89f5,Uid:4d2b1ee2-2507-4d23-8baa-50d119ad9da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\"" Jun 20 18:55:22.155831 containerd[1508]: time="2025-06-20T18:55:22.155542230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 18:55:22.168527 containerd[1508]: time="2025-06-20T18:55:22.168477969Z" level=info msg="CreateContainer within sandbox \"5058e304fce1a519cd9d6209f673a5ddbe2a71a14cdf26f1d5805c313a746d5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"355108a16749a8f2d1ff693d47c206945b5cd0894fb7158159e084466287fd80\"" Jun 20 18:55:22.169500 containerd[1508]: time="2025-06-20T18:55:22.169223296Z" level=info msg="StartContainer for \"355108a16749a8f2d1ff693d47c206945b5cd0894fb7158159e084466287fd80\"" Jun 20 18:55:22.188380 systemd[1]: Started cri-containerd-355108a16749a8f2d1ff693d47c206945b5cd0894fb7158159e084466287fd80.scope - libcontainer container 355108a16749a8f2d1ff693d47c206945b5cd0894fb7158159e084466287fd80. Jun 20 18:55:22.210527 containerd[1508]: time="2025-06-20T18:55:22.210489328Z" level=info msg="StartContainer for \"355108a16749a8f2d1ff693d47c206945b5cd0894fb7158159e084466287fd80\" returns successfully" Jun 20 18:55:22.356058 containerd[1508]: time="2025-06-20T18:55:22.356011730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcj7h,Uid:4188709a-8197-4414-a06c-bc7e73411cf5,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:22.383485 containerd[1508]: time="2025-06-20T18:55:22.383323433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:22.383485 containerd[1508]: time="2025-06-20T18:55:22.383358100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:22.383485 containerd[1508]: time="2025-06-20T18:55:22.383366798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.383485 containerd[1508]: time="2025-06-20T18:55:22.383417266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:22.407328 systemd[1]: Started cri-containerd-14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1.scope - libcontainer container 14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1. Jun 20 18:55:22.445487 containerd[1508]: time="2025-06-20T18:55:22.445436221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-bcj7h,Uid:4188709a-8197-4414-a06c-bc7e73411cf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\"" Jun 20 18:55:22.572663 kubelet[2811]: I0620 18:55:22.572611 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-28klh" podStartSLOduration=1.572596694 podStartE2EDuration="1.572596694s" podCreationTimestamp="2025-06-20 18:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:22.571641516 +0000 UTC m=+7.156166077" watchObservedRunningTime="2025-06-20 18:55:22.572596694 +0000 UTC m=+7.157121264" Jun 20 18:55:26.035823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2411946014.mount: Deactivated successfully. Jun 20 18:55:27.396559 containerd[1508]: time="2025-06-20T18:55:27.396515478Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:27.397126 containerd[1508]: time="2025-06-20T18:55:27.397050078Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 18:55:27.399487 containerd[1508]: time="2025-06-20T18:55:27.399438015Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:27.426495 containerd[1508]: time="2025-06-20T18:55:27.426440859Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 5.270868109s" Jun 20 18:55:27.426495 containerd[1508]: time="2025-06-20T18:55:27.426485165Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 18:55:27.427544 containerd[1508]: time="2025-06-20T18:55:27.427502825Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 18:55:27.430496 containerd[1508]: 
time="2025-06-20T18:55:27.430435981Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 18:55:27.495062 containerd[1508]: time="2025-06-20T18:55:27.495008790Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\"" Jun 20 18:55:27.495635 containerd[1508]: time="2025-06-20T18:55:27.495556034Z" level=info msg="StartContainer for \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\"" Jun 20 18:55:27.573270 systemd[1]: run-containerd-runc-k8s.io-a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072-runc.glSIum.mount: Deactivated successfully. Jun 20 18:55:27.581347 systemd[1]: Started cri-containerd-a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072.scope - libcontainer container a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072. Jun 20 18:55:27.602529 containerd[1508]: time="2025-06-20T18:55:27.602493834Z" level=info msg="StartContainer for \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\" returns successfully" Jun 20 18:55:27.610601 systemd[1]: cri-containerd-a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072.scope: Deactivated successfully. Jun 20 18:55:27.728945 containerd[1508]: time="2025-06-20T18:55:27.709944662Z" level=info msg="shim disconnected" id=a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072 namespace=k8s.io Jun 20 18:55:27.728945 containerd[1508]: time="2025-06-20T18:55:27.728866074Z" level=warning msg="cleaning up after shim disconnected" id=a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072 namespace=k8s.io Jun 20 18:55:27.728945 containerd[1508]: time="2025-06-20T18:55:27.728881113Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:55:28.478306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072-rootfs.mount: Deactivated successfully. Jun 20 18:55:28.571669 containerd[1508]: time="2025-06-20T18:55:28.571615896Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 18:55:28.609474 containerd[1508]: time="2025-06-20T18:55:28.609396363Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\"" Jun 20 18:55:28.613315 containerd[1508]: time="2025-06-20T18:55:28.612491400Z" level=info msg="StartContainer for \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\"" Jun 20 18:55:28.641342 systemd[1]: Started cri-containerd-2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2.scope - libcontainer container 2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2. Jun 20 18:55:28.662352 containerd[1508]: time="2025-06-20T18:55:28.662311981Z" level=info msg="StartContainer for \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\" returns successfully" Jun 20 18:55:28.672879 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jun 20 18:55:28.673417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:55:28.673567 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:55:28.677718 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 18:55:28.678460 systemd[1]: cri-containerd-2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2.scope: Deactivated successfully. Jun 20 18:55:28.702772 containerd[1508]: time="2025-06-20T18:55:28.702693696Z" level=info msg="shim disconnected" id=2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2 namespace=k8s.io Jun 20 18:55:28.702772 containerd[1508]: time="2025-06-20T18:55:28.702757198Z" level=warning msg="cleaning up after shim disconnected" id=2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2 namespace=k8s.io Jun 20 18:55:28.702772 containerd[1508]: time="2025-06-20T18:55:28.702765565Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:55:28.708123 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 18:55:28.713398 containerd[1508]: time="2025-06-20T18:55:28.713331383Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:55:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:55:29.478505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2-rootfs.mount: Deactivated successfully. Jun 20 18:55:29.574220 containerd[1508]: time="2025-06-20T18:55:29.573639333Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 18:55:29.601949 containerd[1508]: time="2025-06-20T18:55:29.601878457Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\"" Jun 20 18:55:29.603558 containerd[1508]: time="2025-06-20T18:55:29.602522889Z" level=info msg="StartContainer for \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\"" Jun 20 18:55:29.625868 systemd[1]: run-containerd-runc-k8s.io-f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da-runc.zkMouq.mount: Deactivated successfully. Jun 20 18:55:29.635390 systemd[1]: Started cri-containerd-f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da.scope - libcontainer container f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da. Jun 20 18:55:29.673092 containerd[1508]: time="2025-06-20T18:55:29.673048480Z" level=info msg="StartContainer for \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\" returns successfully" Jun 20 18:55:29.687862 systemd[1]: cri-containerd-f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da.scope: Deactivated successfully. 
Jun 20 18:55:29.727466 containerd[1508]: time="2025-06-20T18:55:29.727369117Z" level=info msg="shim disconnected" id=f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da namespace=k8s.io Jun 20 18:55:29.727466 containerd[1508]: time="2025-06-20T18:55:29.727462138Z" level=warning msg="cleaning up after shim disconnected" id=f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da namespace=k8s.io Jun 20 18:55:29.727466 containerd[1508]: time="2025-06-20T18:55:29.727471417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:55:29.742907 containerd[1508]: time="2025-06-20T18:55:29.742806169Z" level=warning msg="cleanup warnings time=\"2025-06-20T18:55:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 18:55:30.478313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da-rootfs.mount: Deactivated successfully. Jun 20 18:55:30.579685 containerd[1508]: time="2025-06-20T18:55:30.579407844Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 18:55:30.602266 containerd[1508]: time="2025-06-20T18:55:30.602224600Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\"" Jun 20 18:55:30.603025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201864971.mount: Deactivated successfully. 
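Editor's note: by this point three of cilium-f89f5's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) have each run to completion, which is why their cri-containerd scopes deactivate and their shims disconnect right after a successful StartContainer. A small client-go sketch for checking that progression from the API side, assuming the same admin kubeconfig path as earlier:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "cilium-f89f5", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.InitContainerStatuses {
		if st.State.Terminated != nil {
			fmt.Printf("init %-25s exited with code %d\n", st.Name, st.State.Terminated.ExitCode)
		} else {
			fmt.Printf("init %-25s still waiting or running\n", st.Name)
		}
	}
}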
Jun 20 18:55:30.603549 containerd[1508]: time="2025-06-20T18:55:30.603366396Z" level=info msg="StartContainer for \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\"" Jun 20 18:55:30.628279 containerd[1508]: time="2025-06-20T18:55:30.628248320Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:30.629311 containerd[1508]: time="2025-06-20T18:55:30.629272708Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 18:55:30.630452 containerd[1508]: time="2025-06-20T18:55:30.630432789Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 18:55:30.631462 containerd[1508]: time="2025-06-20T18:55:30.631436267Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.203907461s" Jun 20 18:55:30.631674 containerd[1508]: time="2025-06-20T18:55:30.631465374Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 18:55:30.636422 containerd[1508]: time="2025-06-20T18:55:30.636393447Z" level=info msg="CreateContainer within sandbox \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 18:55:30.650385 systemd[1]: Started cri-containerd-b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55.scope - libcontainer container b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55. Jun 20 18:55:30.674236 containerd[1508]: time="2025-06-20T18:55:30.674157309Z" level=info msg="CreateContainer within sandbox \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\"" Jun 20 18:55:30.675434 containerd[1508]: time="2025-06-20T18:55:30.675317962Z" level=info msg="StartContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\"" Jun 20 18:55:30.681376 systemd[1]: cri-containerd-b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55.scope: Deactivated successfully. 
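Editor's note: the two image pulls in this log work out to roughly 32 MB/s for the cilium agent image (166,730,503 bytes in about 5.27 s) and roughly 6 MB/s for operator-generic above (18,904,197 bytes in about 3.20 s). Each pull is a single CRI ImageService call; a minimal sketch, again assuming containerd's default socket and using the digest from the log:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	// Digest copied from the PullImage lines above.
	resp, err := img.PullImage(context.TODO(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{
			Image: "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", resp.ImageRef)
}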
Jun 20 18:55:30.683053 containerd[1508]: time="2025-06-20T18:55:30.682849155Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice/cri-containerd-b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55.scope/memory.events\": no such file or directory" Jun 20 18:55:30.684771 containerd[1508]: time="2025-06-20T18:55:30.684552811Z" level=info msg="StartContainer for \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\" returns successfully" Jun 20 18:55:30.702397 systemd[1]: Started cri-containerd-83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248.scope - libcontainer container 83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248. Jun 20 18:55:30.710166 containerd[1508]: time="2025-06-20T18:55:30.710035432Z" level=info msg="shim disconnected" id=b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55 namespace=k8s.io Jun 20 18:55:30.710331 containerd[1508]: time="2025-06-20T18:55:30.710172216Z" level=warning msg="cleaning up after shim disconnected" id=b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55 namespace=k8s.io Jun 20 18:55:30.710331 containerd[1508]: time="2025-06-20T18:55:30.710182156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 18:55:30.731734 containerd[1508]: time="2025-06-20T18:55:30.731564569Z" level=info msg="StartContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" returns successfully" Jun 20 18:55:31.480074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55-rootfs.mount: Deactivated successfully. Jun 20 18:55:31.588653 containerd[1508]: time="2025-06-20T18:55:31.588451859Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 18:55:31.608297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326354120.mount: Deactivated successfully. 
Jun 20 18:55:31.608704 containerd[1508]: time="2025-06-20T18:55:31.608658078Z" level=info msg="CreateContainer within sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\"" Jun 20 18:55:31.609972 containerd[1508]: time="2025-06-20T18:55:31.609301917Z" level=info msg="StartContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\"" Jun 20 18:55:31.625945 kubelet[2811]: I0620 18:55:31.622870 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-bcj7h" podStartSLOduration=1.4368589059999999 podStartE2EDuration="9.622851539s" podCreationTimestamp="2025-06-20 18:55:22 +0000 UTC" firstStartedPulling="2025-06-20 18:55:22.446695632 +0000 UTC m=+7.031220193" lastFinishedPulling="2025-06-20 18:55:30.632688267 +0000 UTC m=+15.217212826" observedRunningTime="2025-06-20 18:55:31.595288405 +0000 UTC m=+16.179812975" watchObservedRunningTime="2025-06-20 18:55:31.622851539 +0000 UTC m=+16.207376099" Jun 20 18:55:31.644317 systemd[1]: Started cri-containerd-9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456.scope - libcontainer container 9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456. Jun 20 18:55:31.668741 containerd[1508]: time="2025-06-20T18:55:31.668698686Z" level=info msg="StartContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" returns successfully" Jun 20 18:55:31.823656 kubelet[2811]: I0620 18:55:31.823606 2811 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jun 20 18:55:31.861538 systemd[1]: Created slice kubepods-burstable-podb65bf041_5f41_4c29_a362_adc8229ba652.slice - libcontainer container kubepods-burstable-podb65bf041_5f41_4c29_a362_adc8229ba652.slice. Jun 20 18:55:31.871154 systemd[1]: Created slice kubepods-burstable-podbe15b503_e287_452a_bcda_c87e9dff39ab.slice - libcontainer container kubepods-burstable-podbe15b503_e287_452a_bcda_c87e9dff39ab.slice. 
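Editor's note: the pod_startup_latency_tracker entry above is easier to read once the image-pull window is subtracted out. For cilium-operator-6c4d7847fc-bcj7h the end-to-end startup was 9.622851539 s, of which the pull ran from 18:55:22.446695632 to 18:55:30.632688267, about 8.186 s; 9.623 s minus 8.186 s is roughly 1.437 s, exactly the reported podStartSLOduration. The SLO metric therefore excludes image-pull time, which is why the later cilium-f89f5 entry shows the same split (11.605 s end-to-end versus 6.333 s SLO, with a 5.27 s pull).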
Jun 20 18:55:31.897933 kubelet[2811]: I0620 18:55:31.897891 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58ngk\" (UniqueName: \"kubernetes.io/projected/be15b503-e287-452a-bcda-c87e9dff39ab-kube-api-access-58ngk\") pod \"coredns-674b8bbfcf-zp4zf\" (UID: \"be15b503-e287-452a-bcda-c87e9dff39ab\") " pod="kube-system/coredns-674b8bbfcf-zp4zf" Jun 20 18:55:31.898237 kubelet[2811]: I0620 18:55:31.898109 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b65bf041-5f41-4c29-a362-adc8229ba652-config-volume\") pod \"coredns-674b8bbfcf-9mpx9\" (UID: \"b65bf041-5f41-4c29-a362-adc8229ba652\") " pod="kube-system/coredns-674b8bbfcf-9mpx9" Jun 20 18:55:31.898237 kubelet[2811]: I0620 18:55:31.898134 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79mvg\" (UniqueName: \"kubernetes.io/projected/b65bf041-5f41-4c29-a362-adc8229ba652-kube-api-access-79mvg\") pod \"coredns-674b8bbfcf-9mpx9\" (UID: \"b65bf041-5f41-4c29-a362-adc8229ba652\") " pod="kube-system/coredns-674b8bbfcf-9mpx9" Jun 20 18:55:31.898237 kubelet[2811]: I0620 18:55:31.898155 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be15b503-e287-452a-bcda-c87e9dff39ab-config-volume\") pod \"coredns-674b8bbfcf-zp4zf\" (UID: \"be15b503-e287-452a-bcda-c87e9dff39ab\") " pod="kube-system/coredns-674b8bbfcf-zp4zf" Jun 20 18:55:32.167250 containerd[1508]: time="2025-06-20T18:55:32.166759106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9mpx9,Uid:b65bf041-5f41-4c29-a362-adc8229ba652,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:32.175023 containerd[1508]: time="2025-06-20T18:55:32.174752914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp4zf,Uid:be15b503-e287-452a-bcda-c87e9dff39ab,Namespace:kube-system,Attempt:0,}" Jun 20 18:55:32.605816 kubelet[2811]: I0620 18:55:32.605321 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-f89f5" podStartSLOduration=6.332766811 podStartE2EDuration="11.605301719s" podCreationTimestamp="2025-06-20 18:55:21 +0000 UTC" firstStartedPulling="2025-06-20 18:55:22.154835318 +0000 UTC m=+6.739359879" lastFinishedPulling="2025-06-20 18:55:27.427370227 +0000 UTC m=+12.011894787" observedRunningTime="2025-06-20 18:55:32.603264924 +0000 UTC m=+17.187789494" watchObservedRunningTime="2025-06-20 18:55:32.605301719 +0000 UTC m=+17.189826289" Jun 20 18:55:34.574448 systemd-networkd[1403]: cilium_host: Link UP Jun 20 18:55:34.575164 systemd-networkd[1403]: cilium_net: Link UP Jun 20 18:55:34.575982 systemd-networkd[1403]: cilium_net: Gained carrier Jun 20 18:55:34.576671 systemd-networkd[1403]: cilium_host: Gained carrier Jun 20 18:55:34.658516 systemd-networkd[1403]: cilium_vxlan: Link UP Jun 20 18:55:34.658525 systemd-networkd[1403]: cilium_vxlan: Gained carrier Jun 20 18:55:34.929364 systemd-networkd[1403]: cilium_net: Gained IPv6LL Jun 20 18:55:34.952277 kernel: NET: Registered PF_ALG protocol family Jun 20 18:55:35.233740 systemd-networkd[1403]: cilium_host: Gained IPv6LL Jun 20 18:55:35.493833 systemd-networkd[1403]: lxc_health: Link UP Jun 20 18:55:35.502381 systemd-networkd[1403]: lxc_health: Gained carrier Jun 20 18:55:35.745879 systemd-networkd[1403]: cilium_vxlan: Gained 
IPv6LL Jun 20 18:55:35.755326 systemd-networkd[1403]: lxc71fd73a278b6: Link UP Jun 20 18:55:35.757756 kernel: eth0: renamed from tmpa7b5a Jun 20 18:55:35.766309 systemd-networkd[1403]: lxc71fd73a278b6: Gained carrier Jun 20 18:55:35.781727 kernel: eth0: renamed from tmp17c1c Jun 20 18:55:35.781007 systemd-networkd[1403]: lxc860d6a5b1478: Link UP Jun 20 18:55:35.789541 systemd-networkd[1403]: lxc860d6a5b1478: Gained carrier Jun 20 18:55:36.642271 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jun 20 18:55:36.833615 systemd-networkd[1403]: lxc71fd73a278b6: Gained IPv6LL Jun 20 18:55:37.091874 systemd-networkd[1403]: lxc860d6a5b1478: Gained IPv6LL Jun 20 18:55:38.958314 containerd[1508]: time="2025-06-20T18:55:38.957359913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:38.958314 containerd[1508]: time="2025-06-20T18:55:38.957417795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:38.958314 containerd[1508]: time="2025-06-20T18:55:38.957431401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:38.958314 containerd[1508]: time="2025-06-20T18:55:38.957503000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:38.982320 systemd[1]: Started cri-containerd-a7b5ad86c366b6a4a233ea4686a00f941c1b4a9f349d3c36c9da1818fb7b2366.scope - libcontainer container a7b5ad86c366b6a4a233ea4686a00f941c1b4a9f349d3c36c9da1818fb7b2366. Jun 20 18:55:38.990427 containerd[1508]: time="2025-06-20T18:55:38.990168613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 18:55:38.990427 containerd[1508]: time="2025-06-20T18:55:38.990242135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 18:55:38.990427 containerd[1508]: time="2025-06-20T18:55:38.990255731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:38.990427 containerd[1508]: time="2025-06-20T18:55:38.990328121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 18:55:39.015344 systemd[1]: Started cri-containerd-17c1cd53bdf5c883a2099f0f6167a71f6c2c58f5d711512a7f2ab7f760bbd92f.scope - libcontainer container 17c1cd53bdf5c883a2099f0f6167a71f6c2c58f5d711512a7f2ab7f760bbd92f. 
Jun 20 18:55:39.058481 containerd[1508]: time="2025-06-20T18:55:39.058348396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9mpx9,Uid:b65bf041-5f41-4c29-a362-adc8229ba652,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7b5ad86c366b6a4a233ea4686a00f941c1b4a9f349d3c36c9da1818fb7b2366\"" Jun 20 18:55:39.069026 containerd[1508]: time="2025-06-20T18:55:39.068432439Z" level=info msg="CreateContainer within sandbox \"a7b5ad86c366b6a4a233ea4686a00f941c1b4a9f349d3c36c9da1818fb7b2366\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:55:39.076917 containerd[1508]: time="2025-06-20T18:55:39.076889356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zp4zf,Uid:be15b503-e287-452a-bcda-c87e9dff39ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"17c1cd53bdf5c883a2099f0f6167a71f6c2c58f5d711512a7f2ab7f760bbd92f\"" Jun 20 18:55:39.081600 containerd[1508]: time="2025-06-20T18:55:39.081573808Z" level=info msg="CreateContainer within sandbox \"17c1cd53bdf5c883a2099f0f6167a71f6c2c58f5d711512a7f2ab7f760bbd92f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 18:55:39.094458 containerd[1508]: time="2025-06-20T18:55:39.093667326Z" level=info msg="CreateContainer within sandbox \"a7b5ad86c366b6a4a233ea4686a00f941c1b4a9f349d3c36c9da1818fb7b2366\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"88e9387f612795ab2b9fe5174dffd5822c509a544914715c5458a4d415c55a3f\"" Jun 20 18:55:39.095217 containerd[1508]: time="2025-06-20T18:55:39.095177107Z" level=info msg="StartContainer for \"88e9387f612795ab2b9fe5174dffd5822c509a544914715c5458a4d415c55a3f\"" Jun 20 18:55:39.098966 containerd[1508]: time="2025-06-20T18:55:39.098931867Z" level=info msg="CreateContainer within sandbox \"17c1cd53bdf5c883a2099f0f6167a71f6c2c58f5d711512a7f2ab7f760bbd92f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"471a5b5cd91c2316ac4e268702b43bd268925e5223759413bb6df2c87b383e79\"" Jun 20 18:55:39.099298 containerd[1508]: time="2025-06-20T18:55:39.099274307Z" level=info msg="StartContainer for \"471a5b5cd91c2316ac4e268702b43bd268925e5223759413bb6df2c87b383e79\"" Jun 20 18:55:39.129338 systemd[1]: Started cri-containerd-88e9387f612795ab2b9fe5174dffd5822c509a544914715c5458a4d415c55a3f.scope - libcontainer container 88e9387f612795ab2b9fe5174dffd5822c509a544914715c5458a4d415c55a3f. Jun 20 18:55:39.132806 systemd[1]: Started cri-containerd-471a5b5cd91c2316ac4e268702b43bd268925e5223759413bb6df2c87b383e79.scope - libcontainer container 471a5b5cd91c2316ac4e268702b43bd268925e5223759413bb6df2c87b383e79. 
Jun 20 18:55:39.162379 containerd[1508]: time="2025-06-20T18:55:39.162263560Z" level=info msg="StartContainer for \"88e9387f612795ab2b9fe5174dffd5822c509a544914715c5458a4d415c55a3f\" returns successfully" Jun 20 18:55:39.162379 containerd[1508]: time="2025-06-20T18:55:39.162341731Z" level=info msg="StartContainer for \"471a5b5cd91c2316ac4e268702b43bd268925e5223759413bb6df2c87b383e79\" returns successfully" Jun 20 18:55:39.616955 kubelet[2811]: I0620 18:55:39.616865 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zp4zf" podStartSLOduration=17.616843606 podStartE2EDuration="17.616843606s" podCreationTimestamp="2025-06-20 18:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:39.615991182 +0000 UTC m=+24.200515753" watchObservedRunningTime="2025-06-20 18:55:39.616843606 +0000 UTC m=+24.201368177" Jun 20 18:55:39.630717 kubelet[2811]: I0620 18:55:39.630526 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9mpx9" podStartSLOduration=17.63050632 podStartE2EDuration="17.63050632s" podCreationTimestamp="2025-06-20 18:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 18:55:39.629265016 +0000 UTC m=+24.213789586" watchObservedRunningTime="2025-06-20 18:55:39.63050632 +0000 UTC m=+24.215030881" Jun 20 18:55:39.962930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1551858816.mount: Deactivated successfully. Jun 20 18:58:07.658110 update_engine[1494]: I20250620 18:58:07.658039 1494 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jun 20 18:58:07.658110 update_engine[1494]: I20250620 18:58:07.658100 1494 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.659638 1494 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660310 1494 omaha_request_params.cc:62] Current group set to stable Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660435 1494 update_attempter.cc:499] Already updated boot flags. Skipping. Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660444 1494 update_attempter.cc:643] Scheduling an action processor start. 
Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660462 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660492 1494 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660542 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660549 1494 omaha_request_action.cc:272] Request: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: Jun 20 18:58:07.660743 update_engine[1494]: I20250620 18:58:07.660555 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:58:07.669920 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jun 20 18:58:07.673826 update_engine[1494]: I20250620 18:58:07.673777 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:58:07.674360 update_engine[1494]: I20250620 18:58:07.674234 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:58:07.676436 update_engine[1494]: E20250620 18:58:07.676395 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:58:07.676515 update_engine[1494]: I20250620 18:58:07.676483 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jun 20 18:58:17.541520 update_engine[1494]: I20250620 18:58:17.541428 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:58:17.541896 update_engine[1494]: I20250620 18:58:17.541678 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:58:17.541952 update_engine[1494]: I20250620 18:58:17.541918 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:58:17.542353 update_engine[1494]: E20250620 18:58:17.542317 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:58:17.542398 update_engine[1494]: I20250620 18:58:17.542382 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jun 20 18:58:27.541542 update_engine[1494]: I20250620 18:58:27.541456 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:58:27.542031 update_engine[1494]: I20250620 18:58:27.541747 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:58:27.542031 update_engine[1494]: I20250620 18:58:27.542018 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 18:58:27.542530 update_engine[1494]: E20250620 18:58:27.542462 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:58:27.542530 update_engine[1494]: I20250620 18:58:27.542507 1494 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jun 20 18:58:37.538225 update_engine[1494]: I20250620 18:58:37.538126 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:58:37.538534 update_engine[1494]: I20250620 18:58:37.538385 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:58:37.538642 update_engine[1494]: I20250620 18:58:37.538609 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jun 20 18:58:37.538998 update_engine[1494]: E20250620 18:58:37.538969 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:58:37.539039 update_engine[1494]: I20250620 18:58:37.539008 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:58:37.539039 update_engine[1494]: I20250620 18:58:37.539016 1494 omaha_request_action.cc:617] Omaha request response: Jun 20 18:58:37.539121 update_engine[1494]: E20250620 18:58:37.539099 1494 omaha_request_action.cc:636] Omaha request network transfer failed. Jun 20 18:58:37.539149 update_engine[1494]: I20250620 18:58:37.539122 1494 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jun 20 18:58:37.539149 update_engine[1494]: I20250620 18:58:37.539127 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:58:37.539149 update_engine[1494]: I20250620 18:58:37.539132 1494 update_attempter.cc:306] Processing Done. Jun 20 18:58:37.539149 update_engine[1494]: E20250620 18:58:37.539144 1494 update_attempter.cc:619] Update failed. Jun 20 18:58:37.539235 update_engine[1494]: I20250620 18:58:37.539149 1494 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jun 20 18:58:37.539235 update_engine[1494]: I20250620 18:58:37.539153 1494 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jun 20 18:58:37.539235 update_engine[1494]: I20250620 18:58:37.539158 1494 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539245 1494 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539263 1494 omaha_request_action.cc:271] Posting an Omaha request to disabled Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539268 1494 omaha_request_action.cc:272] Request: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539273 1494 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539384 1494 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jun 20 18:58:37.539782 update_engine[1494]: I20250620 18:58:37.539522 1494 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jun 20 18:58:37.540493 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jun 20 18:58:37.540493 locksmithd[1531]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jun 20 18:58:37.540720 update_engine[1494]: E20250620 18:58:37.539860 1494 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539894 1494 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539901 1494 omaha_request_action.cc:617] Omaha request response: Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539906 1494 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539909 1494 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539913 1494 update_attempter.cc:306] Processing Done. Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539918 1494 update_attempter.cc:310] Error event sent. Jun 20 18:58:37.540720 update_engine[1494]: I20250620 18:58:37.539925 1494 update_check_scheduler.cc:74] Next update check in 44m55s Jun 20 18:59:52.590546 systemd[1]: Started sshd@7-46.62.134.149:22-139.178.68.195:32918.service - OpenSSH per-connection server daemon (139.178.68.195:32918). Jun 20 18:59:53.577602 sshd[4218]: Accepted publickey for core from 139.178.68.195 port 32918 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 18:59:53.579403 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 18:59:53.584502 systemd-logind[1491]: New session 8 of user core. Jun 20 18:59:53.589332 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 18:59:54.652534 sshd[4220]: Connection closed by 139.178.68.195 port 32918 Jun 20 18:59:54.653666 sshd-session[4218]: pam_unix(sshd:session): session closed for user core Jun 20 18:59:54.660369 systemd[1]: sshd@7-46.62.134.149:22-139.178.68.195:32918.service: Deactivated successfully. Jun 20 18:59:54.662858 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 18:59:54.665897 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit. Jun 20 18:59:54.668002 systemd-logind[1491]: Removed session 8. Jun 20 18:59:59.831783 systemd[1]: Started sshd@8-46.62.134.149:22-139.178.68.195:33080.service - OpenSSH per-connection server daemon (139.178.68.195:33080). Jun 20 19:00:00.800094 sshd[4233]: Accepted publickey for core from 139.178.68.195 port 33080 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:00.801546 sshd-session[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:00.805694 systemd-logind[1491]: New session 9 of user core. Jun 20 19:00:00.810546 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:00:01.538953 sshd[4235]: Connection closed by 139.178.68.195 port 33080 Jun 20 19:00:01.539556 sshd-session[4233]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:01.542617 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit. 
Jun 20 19:00:01.542894 systemd[1]: sshd@8-46.62.134.149:22-139.178.68.195:33080.service: Deactivated successfully. Jun 20 19:00:01.544615 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:00:01.545548 systemd-logind[1491]: Removed session 9. Jun 20 19:00:06.710465 systemd[1]: Started sshd@9-46.62.134.149:22-139.178.68.195:47706.service - OpenSSH per-connection server daemon (139.178.68.195:47706). Jun 20 19:00:07.676524 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 47706 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:07.678014 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:07.683711 systemd-logind[1491]: New session 10 of user core. Jun 20 19:00:07.688356 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:00:08.448865 sshd[4250]: Connection closed by 139.178.68.195 port 47706 Jun 20 19:00:08.449663 sshd-session[4248]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:08.453024 systemd[1]: sshd@9-46.62.134.149:22-139.178.68.195:47706.service: Deactivated successfully. Jun 20 19:00:08.455427 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:00:08.456926 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:00:08.458671 systemd-logind[1491]: Removed session 10. Jun 20 19:00:08.624936 systemd[1]: Started sshd@10-46.62.134.149:22-139.178.68.195:47716.service - OpenSSH per-connection server daemon (139.178.68.195:47716). Jun 20 19:00:09.605384 sshd[4263]: Accepted publickey for core from 139.178.68.195 port 47716 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:09.606939 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:09.612258 systemd-logind[1491]: New session 11 of user core. Jun 20 19:00:09.618378 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:00:10.371442 sshd[4265]: Connection closed by 139.178.68.195 port 47716 Jun 20 19:00:10.372096 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:10.375687 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:00:10.376368 systemd[1]: sshd@10-46.62.134.149:22-139.178.68.195:47716.service: Deactivated successfully. Jun 20 19:00:10.378334 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:00:10.379433 systemd-logind[1491]: Removed session 11. Jun 20 19:00:10.544407 systemd[1]: Started sshd@11-46.62.134.149:22-139.178.68.195:47728.service - OpenSSH per-connection server daemon (139.178.68.195:47728). Jun 20 19:00:11.519749 sshd[4275]: Accepted publickey for core from 139.178.68.195 port 47728 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:11.521437 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:11.528340 systemd-logind[1491]: New session 12 of user core. Jun 20 19:00:11.535359 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 20 19:00:12.250164 sshd[4277]: Connection closed by 139.178.68.195 port 47728 Jun 20 19:00:12.250946 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:12.254379 systemd[1]: sshd@11-46.62.134.149:22-139.178.68.195:47728.service: Deactivated successfully. Jun 20 19:00:12.257637 systemd[1]: session-12.scope: Deactivated successfully. 
Jun 20 19:00:12.259762 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit. Jun 20 19:00:12.260903 systemd-logind[1491]: Removed session 12. Jun 20 19:00:17.422519 systemd[1]: Started sshd@12-46.62.134.149:22-139.178.68.195:38830.service - OpenSSH per-connection server daemon (139.178.68.195:38830). Jun 20 19:00:18.386349 sshd[4292]: Accepted publickey for core from 139.178.68.195 port 38830 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:18.387601 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:18.391693 systemd-logind[1491]: New session 13 of user core. Jun 20 19:00:18.397334 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 20 19:00:19.116151 sshd[4294]: Connection closed by 139.178.68.195 port 38830 Jun 20 19:00:19.116824 sshd-session[4292]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:19.120882 systemd[1]: sshd@12-46.62.134.149:22-139.178.68.195:38830.service: Deactivated successfully. Jun 20 19:00:19.123047 systemd[1]: session-13.scope: Deactivated successfully. Jun 20 19:00:19.124581 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit. Jun 20 19:00:19.125807 systemd-logind[1491]: Removed session 13. Jun 20 19:00:24.282946 systemd[1]: Started sshd@13-46.62.134.149:22-139.178.68.195:43600.service - OpenSSH per-connection server daemon (139.178.68.195:43600). Jun 20 19:00:25.252735 sshd[4308]: Accepted publickey for core from 139.178.68.195 port 43600 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:25.254064 sshd-session[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:25.259272 systemd-logind[1491]: New session 14 of user core. Jun 20 19:00:25.269366 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 20 19:00:25.980172 sshd[4310]: Connection closed by 139.178.68.195 port 43600 Jun 20 19:00:25.980906 sshd-session[4308]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:25.986714 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit. Jun 20 19:00:25.987439 systemd[1]: sshd@13-46.62.134.149:22-139.178.68.195:43600.service: Deactivated successfully. Jun 20 19:00:25.989219 systemd[1]: session-14.scope: Deactivated successfully. Jun 20 19:00:25.990145 systemd-logind[1491]: Removed session 14. Jun 20 19:00:26.152542 systemd[1]: Started sshd@14-46.62.134.149:22-139.178.68.195:43612.service - OpenSSH per-connection server daemon (139.178.68.195:43612). Jun 20 19:00:27.130054 sshd[4321]: Accepted publickey for core from 139.178.68.195 port 43612 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:27.131827 sshd-session[4321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:27.136314 systemd-logind[1491]: New session 15 of user core. Jun 20 19:00:27.141351 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 20 19:00:28.028694 sshd[4323]: Connection closed by 139.178.68.195 port 43612 Jun 20 19:00:28.029520 sshd-session[4321]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:28.034819 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit. Jun 20 19:00:28.035110 systemd[1]: sshd@14-46.62.134.149:22-139.178.68.195:43612.service: Deactivated successfully. Jun 20 19:00:28.036728 systemd[1]: session-15.scope: Deactivated successfully. 
Jun 20 19:00:28.037930 systemd-logind[1491]: Removed session 15. Jun 20 19:00:28.202488 systemd[1]: Started sshd@15-46.62.134.149:22-139.178.68.195:43618.service - OpenSSH per-connection server daemon (139.178.68.195:43618). Jun 20 19:00:29.176219 sshd[4333]: Accepted publickey for core from 139.178.68.195 port 43618 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:29.177528 sshd-session[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:29.182019 systemd-logind[1491]: New session 16 of user core. Jun 20 19:00:29.186408 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 20 19:00:30.843685 sshd[4335]: Connection closed by 139.178.68.195 port 43618 Jun 20 19:00:30.844957 sshd-session[4333]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:30.849373 systemd[1]: sshd@15-46.62.134.149:22-139.178.68.195:43618.service: Deactivated successfully. Jun 20 19:00:30.851494 systemd[1]: session-16.scope: Deactivated successfully. Jun 20 19:00:30.852518 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit. Jun 20 19:00:30.853857 systemd-logind[1491]: Removed session 16. Jun 20 19:00:31.017584 systemd[1]: Started sshd@16-46.62.134.149:22-139.178.68.195:43622.service - OpenSSH per-connection server daemon (139.178.68.195:43622). Jun 20 19:00:31.997591 sshd[4352]: Accepted publickey for core from 139.178.68.195 port 43622 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:31.998983 sshd-session[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:32.003958 systemd-logind[1491]: New session 17 of user core. Jun 20 19:00:32.008398 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 20 19:00:32.832911 sshd[4354]: Connection closed by 139.178.68.195 port 43622 Jun 20 19:00:32.833476 sshd-session[4352]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:32.836615 systemd[1]: sshd@16-46.62.134.149:22-139.178.68.195:43622.service: Deactivated successfully. Jun 20 19:00:32.838429 systemd[1]: session-17.scope: Deactivated successfully. Jun 20 19:00:32.839641 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit. Jun 20 19:00:32.840687 systemd-logind[1491]: Removed session 17. Jun 20 19:00:33.001920 systemd[1]: Started sshd@17-46.62.134.149:22-139.178.68.195:43626.service - OpenSSH per-connection server daemon (139.178.68.195:43626). Jun 20 19:00:33.972007 sshd[4364]: Accepted publickey for core from 139.178.68.195 port 43626 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:33.973330 sshd-session[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:33.978148 systemd-logind[1491]: New session 18 of user core. Jun 20 19:00:33.982401 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 20 19:00:34.705643 sshd[4366]: Connection closed by 139.178.68.195 port 43626 Jun 20 19:00:34.706228 sshd-session[4364]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:34.709110 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit. Jun 20 19:00:34.709689 systemd[1]: sshd@17-46.62.134.149:22-139.178.68.195:43626.service: Deactivated successfully. Jun 20 19:00:34.711372 systemd[1]: session-18.scope: Deactivated successfully. Jun 20 19:00:34.712521 systemd-logind[1491]: Removed session 18. 
Jun 20 19:00:39.875308 systemd[1]: Started sshd@18-46.62.134.149:22-139.178.68.195:48494.service - OpenSSH per-connection server daemon (139.178.68.195:48494). Jun 20 19:00:40.848741 sshd[4380]: Accepted publickey for core from 139.178.68.195 port 48494 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:40.850241 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:40.855640 systemd-logind[1491]: New session 19 of user core. Jun 20 19:00:40.857424 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 20 19:00:41.581022 sshd[4382]: Connection closed by 139.178.68.195 port 48494 Jun 20 19:00:41.581876 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:41.584822 systemd[1]: sshd@18-46.62.134.149:22-139.178.68.195:48494.service: Deactivated successfully. Jun 20 19:00:41.586539 systemd[1]: session-19.scope: Deactivated successfully. Jun 20 19:00:41.587810 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit. Jun 20 19:00:41.589024 systemd-logind[1491]: Removed session 19. Jun 20 19:00:46.757434 systemd[1]: Started sshd@19-46.62.134.149:22-139.178.68.195:58058.service - OpenSSH per-connection server daemon (139.178.68.195:58058). Jun 20 19:00:47.737330 sshd[4396]: Accepted publickey for core from 139.178.68.195 port 58058 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:47.738564 sshd-session[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:47.742574 systemd-logind[1491]: New session 20 of user core. Jun 20 19:00:47.746340 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 20 19:00:48.486106 sshd[4398]: Connection closed by 139.178.68.195 port 58058 Jun 20 19:00:48.487007 sshd-session[4396]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:48.492465 systemd[1]: sshd@19-46.62.134.149:22-139.178.68.195:58058.service: Deactivated successfully. Jun 20 19:00:48.496044 systemd[1]: session-20.scope: Deactivated successfully. Jun 20 19:00:48.497560 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit. Jun 20 19:00:48.499301 systemd-logind[1491]: Removed session 20. Jun 20 19:00:48.664601 systemd[1]: Started sshd@20-46.62.134.149:22-139.178.68.195:58068.service - OpenSSH per-connection server daemon (139.178.68.195:58068). Jun 20 19:00:49.648440 sshd[4411]: Accepted publickey for core from 139.178.68.195 port 58068 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:49.650638 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:49.661570 systemd-logind[1491]: New session 21 of user core. Jun 20 19:00:49.669474 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jun 20 19:00:51.545375 containerd[1508]: time="2025-06-20T19:00:51.545072059Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 20 19:00:51.548234 containerd[1508]: time="2025-06-20T19:00:51.548193240Z" level=info msg="StopContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" with timeout 30 (s)" Jun 20 19:00:51.549574 containerd[1508]: time="2025-06-20T19:00:51.549480924Z" level=info msg="Stop container \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" with signal terminated" Jun 20 19:00:51.550128 containerd[1508]: time="2025-06-20T19:00:51.550028450Z" level=info msg="StopContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" with timeout 2 (s)" Jun 20 19:00:51.550474 containerd[1508]: time="2025-06-20T19:00:51.550426937Z" level=info msg="Stop container \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" with signal terminated" Jun 20 19:00:51.557573 systemd-networkd[1403]: lxc_health: Link DOWN Jun 20 19:00:51.557579 systemd-networkd[1403]: lxc_health: Lost carrier Jun 20 19:00:51.566014 systemd[1]: cri-containerd-83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248.scope: Deactivated successfully. Jun 20 19:00:51.577754 systemd[1]: cri-containerd-9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456.scope: Deactivated successfully. Jun 20 19:00:51.578059 systemd[1]: cri-containerd-9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456.scope: Consumed 6.674s CPU time, 192.3M memory peak, 71.7M read from disk, 13.3M written to disk. Jun 20 19:00:51.598088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248-rootfs.mount: Deactivated successfully. Jun 20 19:00:51.605069 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456-rootfs.mount: Deactivated successfully. 
Jun 20 19:00:51.610672 containerd[1508]: time="2025-06-20T19:00:51.610618984Z" level=info msg="shim disconnected" id=9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456 namespace=k8s.io Jun 20 19:00:51.610672 containerd[1508]: time="2025-06-20T19:00:51.610667124Z" level=warning msg="cleaning up after shim disconnected" id=9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456 namespace=k8s.io Jun 20 19:00:51.610672 containerd[1508]: time="2025-06-20T19:00:51.610675099Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:51.610937 containerd[1508]: time="2025-06-20T19:00:51.610909129Z" level=info msg="shim disconnected" id=83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248 namespace=k8s.io Jun 20 19:00:51.610987 containerd[1508]: time="2025-06-20T19:00:51.610937000Z" level=warning msg="cleaning up after shim disconnected" id=83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248 namespace=k8s.io Jun 20 19:00:51.610987 containerd[1508]: time="2025-06-20T19:00:51.610945046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:51.622237 containerd[1508]: time="2025-06-20T19:00:51.622026297Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:00:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:00:51.624083 containerd[1508]: time="2025-06-20T19:00:51.624062504Z" level=info msg="StopContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" returns successfully" Jun 20 19:00:51.625026 containerd[1508]: time="2025-06-20T19:00:51.624982738Z" level=info msg="StopPodSandbox for \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\"" Jun 20 19:00:51.627672 containerd[1508]: time="2025-06-20T19:00:51.627654958Z" level=info msg="StopContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" returns successfully" Jun 20 19:00:51.628070 containerd[1508]: time="2025-06-20T19:00:51.627986809Z" level=info msg="StopPodSandbox for \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.626037736Z" level=info msg="Container to stop \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.633024824Z" level=info msg="Container to stop \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.633034482Z" level=info msg="Container to stop \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.633044570Z" level=info msg="Container to stop \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.633051533Z" level=info msg="Container to stop \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.634229 containerd[1508]: time="2025-06-20T19:00:51.628020042Z" 
level=info msg="Container to stop \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 20 19:00:51.636248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e-shm.mount: Deactivated successfully. Jun 20 19:00:51.642472 systemd[1]: cri-containerd-14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1.scope: Deactivated successfully. Jun 20 19:00:51.643448 systemd[1]: cri-containerd-3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e.scope: Deactivated successfully. Jun 20 19:00:51.675062 containerd[1508]: time="2025-06-20T19:00:51.674963286Z" level=info msg="shim disconnected" id=3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e namespace=k8s.io Jun 20 19:00:51.675062 containerd[1508]: time="2025-06-20T19:00:51.675010453Z" level=warning msg="cleaning up after shim disconnected" id=3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e namespace=k8s.io Jun 20 19:00:51.675062 containerd[1508]: time="2025-06-20T19:00:51.675017205Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:51.678189 containerd[1508]: time="2025-06-20T19:00:51.678129811Z" level=info msg="shim disconnected" id=14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1 namespace=k8s.io Jun 20 19:00:51.678189 containerd[1508]: time="2025-06-20T19:00:51.678178262Z" level=warning msg="cleaning up after shim disconnected" id=14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1 namespace=k8s.io Jun 20 19:00:51.678189 containerd[1508]: time="2025-06-20T19:00:51.678188109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:51.689727 containerd[1508]: time="2025-06-20T19:00:51.689678308Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:00:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:00:51.691465 containerd[1508]: time="2025-06-20T19:00:51.691436884Z" level=info msg="TearDown network for sandbox \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\" successfully" Jun 20 19:00:51.691465 containerd[1508]: time="2025-06-20T19:00:51.691457773Z" level=info msg="StopPodSandbox for \"14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1\" returns successfully" Jun 20 19:00:51.692126 containerd[1508]: time="2025-06-20T19:00:51.692086633Z" level=info msg="TearDown network for sandbox \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" successfully" Jun 20 19:00:51.692126 containerd[1508]: time="2025-06-20T19:00:51.692120375Z" level=info msg="StopPodSandbox for \"3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e\" returns successfully" Jun 20 19:00:51.874834 kubelet[2811]: I0620 19:00:51.874796 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-config-path\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.874955 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-net\") pod 
\"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.874982 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-xtables-lock\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.875023 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rglgc\" (UniqueName: \"kubernetes.io/projected/4188709a-8197-4414-a06c-bc7e73411cf5-kube-api-access-rglgc\") pod \"4188709a-8197-4414-a06c-bc7e73411cf5\" (UID: \"4188709a-8197-4414-a06c-bc7e73411cf5\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.875055 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-kernel\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.875070 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-run\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875662 kubelet[2811]: I0620 19:00:51.875084 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hostproc\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875115 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-lib-modules\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875132 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-clustermesh-secrets\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875145 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cni-path\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875161 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mdpx\" (UniqueName: \"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-kube-api-access-5mdpx\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875175 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-bpf-maps\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: 
\"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875804 kubelet[2811]: I0620 19:00:51.875190 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4188709a-8197-4414-a06c-bc7e73411cf5-cilium-config-path\") pod \"4188709a-8197-4414-a06c-bc7e73411cf5\" (UID: \"4188709a-8197-4414-a06c-bc7e73411cf5\") " Jun 20 19:00:51.875954 kubelet[2811]: I0620 19:00:51.875228 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hubble-tls\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875954 kubelet[2811]: I0620 19:00:51.875242 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-etc-cni-netd\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.875954 kubelet[2811]: I0620 19:00:51.875256 2811 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-cgroup\") pod \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\" (UID: \"4d2b1ee2-2507-4d23-8baa-50d119ad9da7\") " Jun 20 19:00:51.884517 kubelet[2811]: I0620 19:00:51.883804 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.884614 kubelet[2811]: I0620 19:00:51.884541 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.884614 kubelet[2811]: I0620 19:00:51.884561 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.885801 kubelet[2811]: I0620 19:00:51.883180 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:00:51.885801 kubelet[2811]: I0620 19:00:51.885633 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cni-path" (OuterVolumeSpecName: "cni-path") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.885801 kubelet[2811]: I0620 19:00:51.885653 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.885801 kubelet[2811]: I0620 19:00:51.885668 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.885801 kubelet[2811]: I0620 19:00:51.885681 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hostproc" (OuterVolumeSpecName: "hostproc") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.885942 kubelet[2811]: I0620 19:00:51.885693 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.889872 kubelet[2811]: I0620 19:00:51.888838 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jun 20 19:00:51.891463 kubelet[2811]: I0620 19:00:51.890651 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4188709a-8197-4414-a06c-bc7e73411cf5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4188709a-8197-4414-a06c-bc7e73411cf5" (UID: "4188709a-8197-4414-a06c-bc7e73411cf5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jun 20 19:00:51.899912 kubelet[2811]: I0620 19:00:51.899785 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.899912 kubelet[2811]: I0620 19:00:51.899836 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jun 20 19:00:51.902748 kubelet[2811]: I0620 19:00:51.902684 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:00:51.902748 kubelet[2811]: I0620 19:00:51.902717 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-kube-api-access-5mdpx" (OuterVolumeSpecName: "kube-api-access-5mdpx") pod "4d2b1ee2-2507-4d23-8baa-50d119ad9da7" (UID: "4d2b1ee2-2507-4d23-8baa-50d119ad9da7"). InnerVolumeSpecName "kube-api-access-5mdpx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:00:51.902867 kubelet[2811]: I0620 19:00:51.902750 2811 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4188709a-8197-4414-a06c-bc7e73411cf5-kube-api-access-rglgc" (OuterVolumeSpecName: "kube-api-access-rglgc") pod "4188709a-8197-4414-a06c-bc7e73411cf5" (UID: "4188709a-8197-4414-a06c-bc7e73411cf5"). InnerVolumeSpecName "kube-api-access-rglgc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jun 20 19:00:51.978787 kubelet[2811]: I0620 19:00:51.978728 2811 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hostproc\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978787 kubelet[2811]: I0620 19:00:51.978789 2811 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-lib-modules\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978809 2811 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-clustermesh-secrets\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978823 2811 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cni-path\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978836 2811 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mdpx\" (UniqueName: \"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-kube-api-access-5mdpx\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978849 2811 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-bpf-maps\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978863 2811 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4188709a-8197-4414-a06c-bc7e73411cf5-cilium-config-path\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978875 2811 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-hubble-tls\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978889 2811 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-etc-cni-netd\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.978948 kubelet[2811]: I0620 19:00:51.978902 2811 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-cgroup\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978916 2811 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-config-path\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978930 2811 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-net\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978952 2811 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-xtables-lock\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978967 2811 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rglgc\" (UniqueName: \"kubernetes.io/projected/4188709a-8197-4414-a06c-bc7e73411cf5-kube-api-access-rglgc\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978980 2811 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-host-proc-sys-kernel\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:51.979134 kubelet[2811]: I0620 19:00:51.978992 2811 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4d2b1ee2-2507-4d23-8baa-50d119ad9da7-cilium-run\") on node \"ci-4230-2-0-5-00d7cf22d6\" DevicePath \"\"" Jun 20 19:00:52.187495 systemd[1]: Removed slice kubepods-besteffort-pod4188709a_8197_4414_a06c_bc7e73411cf5.slice - libcontainer container kubepods-besteffort-pod4188709a_8197_4414_a06c_bc7e73411cf5.slice. Jun 20 19:00:52.190216 kubelet[2811]: I0620 19:00:52.190163 2811 scope.go:117] "RemoveContainer" containerID="83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248" Jun 20 19:00:52.205877 systemd[1]: Removed slice kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice - libcontainer container kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice. Jun 20 19:00:52.205996 systemd[1]: kubepods-burstable-pod4d2b1ee2_2507_4d23_8baa_50d119ad9da7.slice: Consumed 6.738s CPU time, 192.6M memory peak, 72M read from disk, 13.3M written to disk. 
Jun 20 19:00:52.226032 containerd[1508]: time="2025-06-20T19:00:52.225920383Z" level=info msg="RemoveContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\"" Jun 20 19:00:52.233139 containerd[1508]: time="2025-06-20T19:00:52.233056871Z" level=info msg="RemoveContainer for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" returns successfully" Jun 20 19:00:52.233957 kubelet[2811]: I0620 19:00:52.233915 2811 scope.go:117] "RemoveContainer" containerID="83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248" Jun 20 19:00:52.234376 containerd[1508]: time="2025-06-20T19:00:52.234320349Z" level=error msg="ContainerStatus for \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\": not found" Jun 20 19:00:52.239372 kubelet[2811]: E0620 19:00:52.239270 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\": not found" containerID="83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248" Jun 20 19:00:52.241221 kubelet[2811]: I0620 19:00:52.241072 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248"} err="failed to get container status \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\": rpc error: code = NotFound desc = an error occurred when try to find container \"83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248\": not found" Jun 20 19:00:52.241221 kubelet[2811]: I0620 19:00:52.241142 2811 scope.go:117] "RemoveContainer" containerID="9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456" Jun 20 19:00:52.242647 containerd[1508]: time="2025-06-20T19:00:52.242524768Z" level=info msg="RemoveContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\"" Jun 20 19:00:52.246595 containerd[1508]: time="2025-06-20T19:00:52.246483839Z" level=info msg="RemoveContainer for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" returns successfully" Jun 20 19:00:52.246923 kubelet[2811]: I0620 19:00:52.246726 2811 scope.go:117] "RemoveContainer" containerID="b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55" Jun 20 19:00:52.248596 containerd[1508]: time="2025-06-20T19:00:52.248579478Z" level=info msg="RemoveContainer for \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\"" Jun 20 19:00:52.252883 containerd[1508]: time="2025-06-20T19:00:52.252263934Z" level=info msg="RemoveContainer for \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\" returns successfully" Jun 20 19:00:52.253026 kubelet[2811]: I0620 19:00:52.253001 2811 scope.go:117] "RemoveContainer" containerID="f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da" Jun 20 19:00:52.257977 containerd[1508]: time="2025-06-20T19:00:52.257959391Z" level=info msg="RemoveContainer for \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\"" Jun 20 19:00:52.261331 containerd[1508]: time="2025-06-20T19:00:52.261308488Z" level=info msg="RemoveContainer for \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\" returns successfully" Jun 20 19:00:52.261531 kubelet[2811]: I0620 19:00:52.261469 
2811 scope.go:117] "RemoveContainer" containerID="2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2" Jun 20 19:00:52.262546 containerd[1508]: time="2025-06-20T19:00:52.262529166Z" level=info msg="RemoveContainer for \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\"" Jun 20 19:00:52.265064 containerd[1508]: time="2025-06-20T19:00:52.265004196Z" level=info msg="RemoveContainer for \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\" returns successfully" Jun 20 19:00:52.265266 kubelet[2811]: I0620 19:00:52.265213 2811 scope.go:117] "RemoveContainer" containerID="a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072" Jun 20 19:00:52.266172 containerd[1508]: time="2025-06-20T19:00:52.266131981Z" level=info msg="RemoveContainer for \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\"" Jun 20 19:00:52.269042 containerd[1508]: time="2025-06-20T19:00:52.269006048Z" level=info msg="RemoveContainer for \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\" returns successfully" Jun 20 19:00:52.269278 kubelet[2811]: I0620 19:00:52.269242 2811 scope.go:117] "RemoveContainer" containerID="9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456" Jun 20 19:00:52.269538 containerd[1508]: time="2025-06-20T19:00:52.269513238Z" level=error msg="ContainerStatus for \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\": not found" Jun 20 19:00:52.269645 kubelet[2811]: E0620 19:00:52.269618 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\": not found" containerID="9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456" Jun 20 19:00:52.269717 kubelet[2811]: I0620 19:00:52.269647 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456"} err="failed to get container status \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c61640d21b0c90f606b0bca59816c33b5110534197adce0e9b380ca1bf57456\": not found" Jun 20 19:00:52.269717 kubelet[2811]: I0620 19:00:52.269665 2811 scope.go:117] "RemoveContainer" containerID="b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55" Jun 20 19:00:52.269955 containerd[1508]: time="2025-06-20T19:00:52.269904231Z" level=error msg="ContainerStatus for \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\": not found" Jun 20 19:00:52.270073 kubelet[2811]: E0620 19:00:52.270032 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\": not found" containerID="b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55" Jun 20 19:00:52.270148 kubelet[2811]: I0620 19:00:52.270071 2811 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55"} err="failed to get container status \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\": rpc error: code = NotFound desc = an error occurred when try to find container \"b590af48e806a7b1a15a956ab7645413b7dea6627dd9d50764f3207d592aab55\": not found" Jun 20 19:00:52.270148 kubelet[2811]: I0620 19:00:52.270084 2811 scope.go:117] "RemoveContainer" containerID="f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da" Jun 20 19:00:52.270245 containerd[1508]: time="2025-06-20T19:00:52.270225713Z" level=error msg="ContainerStatus for \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\": not found" Jun 20 19:00:52.270324 kubelet[2811]: E0620 19:00:52.270309 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\": not found" containerID="f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da" Jun 20 19:00:52.270357 kubelet[2811]: I0620 19:00:52.270327 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da"} err="failed to get container status \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\": rpc error: code = NotFound desc = an error occurred when try to find container \"f9527cd30bfec36a8cc2906f1441214528c113bad01da8b6e749f9ce3e9783da\": not found" Jun 20 19:00:52.270428 kubelet[2811]: I0620 19:00:52.270338 2811 scope.go:117] "RemoveContainer" containerID="2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2" Jun 20 19:00:52.270701 containerd[1508]: time="2025-06-20T19:00:52.270600687Z" level=error msg="ContainerStatus for \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\": not found" Jun 20 19:00:52.270795 kubelet[2811]: E0620 19:00:52.270772 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\": not found" containerID="2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2" Jun 20 19:00:52.270795 kubelet[2811]: I0620 19:00:52.270791 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2"} err="failed to get container status \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2592ab9c363c039b858d53b06fe24c9f35d66c2962962a87142d80dfae04dde2\": not found" Jun 20 19:00:52.270855 kubelet[2811]: I0620 19:00:52.270801 2811 scope.go:117] "RemoveContainer" containerID="a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072" Jun 20 19:00:52.270932 containerd[1508]: time="2025-06-20T19:00:52.270903034Z" level=error msg="ContainerStatus for \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\": not found" Jun 20 19:00:52.271045 kubelet[2811]: E0620 19:00:52.271022 2811 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\": not found" containerID="a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072" Jun 20 19:00:52.271128 kubelet[2811]: I0620 19:00:52.271042 2811 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072"} err="failed to get container status \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\": rpc error: code = NotFound desc = an error occurred when try to find container \"a5f3df25db5daa001ce6846785e276f6bf64daae0545b30397c2434f63159072\": not found" Jun 20 19:00:52.531020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1-rootfs.mount: Deactivated successfully. Jun 20 19:00:52.531191 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14fa2c363a97cd159c880ab6a4bb9b8327a44bd6be906d6a6b0d74dbdca10aa1-shm.mount: Deactivated successfully. Jun 20 19:00:52.531318 systemd[1]: var-lib-kubelet-pods-4188709a\x2d8197\x2d4414\x2da06c\x2dbc7e73411cf5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drglgc.mount: Deactivated successfully. Jun 20 19:00:52.531414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cff5cf44805808026602570f2fcf5f62e59f3b3395e389053b0b1aa138b311e-rootfs.mount: Deactivated successfully. Jun 20 19:00:52.531505 systemd[1]: var-lib-kubelet-pods-4d2b1ee2\x2d2507\x2d4d23\x2d8baa\x2d50d119ad9da7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mdpx.mount: Deactivated successfully. Jun 20 19:00:52.531600 systemd[1]: var-lib-kubelet-pods-4d2b1ee2\x2d2507\x2d4d23\x2d8baa\x2d50d119ad9da7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jun 20 19:00:52.531710 systemd[1]: var-lib-kubelet-pods-4d2b1ee2\x2d2507\x2d4d23\x2d8baa\x2d50d119ad9da7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jun 20 19:00:53.512958 kubelet[2811]: I0620 19:00:53.512908 2811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4188709a-8197-4414-a06c-bc7e73411cf5" path="/var/lib/kubelet/pods/4188709a-8197-4414-a06c-bc7e73411cf5/volumes" Jun 20 19:00:53.513350 kubelet[2811]: I0620 19:00:53.513323 2811 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d2b1ee2-2507-4d23-8baa-50d119ad9da7" path="/var/lib/kubelet/pods/4d2b1ee2-2507-4d23-8baa-50d119ad9da7/volumes" Jun 20 19:00:53.565090 sshd[4413]: Connection closed by 139.178.68.195 port 58068 Jun 20 19:00:53.565754 sshd-session[4411]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:53.569532 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit. Jun 20 19:00:53.570144 systemd[1]: sshd@20-46.62.134.149:22-139.178.68.195:58068.service: Deactivated successfully. Jun 20 19:00:53.572602 systemd[1]: session-21.scope: Deactivated successfully. Jun 20 19:00:53.573832 systemd-logind[1491]: Removed session 21. 
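Annotation: the RemoveContainer / ContainerStatus exchange above is the normal post-deletion pattern: kubelet removes each Cilium container, then a follow-up status lookup returns gRPC NotFound, which it logs and ignores. A minimal sketch of that lookup against the CRI runtime service follows; the containerd socket path is the usual default and the container ID is copied from this log, both assumptions for illustration only.

```go
// cristatus.go - a minimal sketch of the ContainerStatus lookup that produced
// the NotFound errors above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	id := "83c1fde0091f699ff756f5edecca7afb1c50fd37df4f965fbe674142a2c92248"
	resp, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// Expected after RemoveContainer: the runtime no longer knows the ID.
		fmt.Println("container already removed:", id)
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("state:", resp.GetStatus().GetState())
}
```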
Jun 20 19:00:53.735473 systemd[1]: Started sshd@21-46.62.134.149:22-139.178.68.195:54034.service - OpenSSH per-connection server daemon (139.178.68.195:54034). Jun 20 19:00:54.707271 sshd[4574]: Accepted publickey for core from 139.178.68.195 port 54034 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:54.708578 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:54.713325 systemd-logind[1491]: New session 22 of user core. Jun 20 19:00:54.720340 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 20 19:00:55.666908 kubelet[2811]: E0620 19:00:55.666780 2811 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:00:56.032613 systemd[1]: Created slice kubepods-burstable-podc8b8f5e5_e0cf_45ad_bd67_920064acd1a7.slice - libcontainer container kubepods-burstable-podc8b8f5e5_e0cf_45ad_bd67_920064acd1a7.slice. Jun 20 19:00:56.119521 kubelet[2811]: I0620 19:00:56.119475 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-bpf-maps\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.119844 kubelet[2811]: I0620 19:00:56.119533 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-etc-cni-netd\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.119844 kubelet[2811]: I0620 19:00:56.119571 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-xtables-lock\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.119844 kubelet[2811]: I0620 19:00:56.119595 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-host-proc-sys-kernel\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121561 kubelet[2811]: I0620 19:00:56.121502 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-cni-path\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121561 kubelet[2811]: I0620 19:00:56.121552 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-host-proc-sys-net\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121695 kubelet[2811]: I0620 19:00:56.121587 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-cilium-config-path\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121695 kubelet[2811]: I0620 19:00:56.121627 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-hubble-tls\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121695 kubelet[2811]: I0620 19:00:56.121654 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2ctr\" (UniqueName: \"kubernetes.io/projected/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-kube-api-access-x2ctr\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121695 kubelet[2811]: I0620 19:00:56.121684 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-cilium-run\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121842 kubelet[2811]: I0620 19:00:56.121705 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-lib-modules\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121842 kubelet[2811]: I0620 19:00:56.121726 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-hostproc\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121842 kubelet[2811]: I0620 19:00:56.121753 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-clustermesh-secrets\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121842 kubelet[2811]: I0620 19:00:56.121775 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-cilium-ipsec-secrets\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.121842 kubelet[2811]: I0620 19:00:56.121799 2811 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8b8f5e5-e0cf-45ad-bd67-920064acd1a7-cilium-cgroup\") pod \"cilium-ccg8q\" (UID: \"c8b8f5e5-e0cf-45ad-bd67-920064acd1a7\") " pod="kube-system/cilium-ccg8q" Jun 20 19:00:56.122581 sshd[4576]: Connection closed by 139.178.68.195 port 54034 Jun 20 19:00:56.123522 sshd-session[4574]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:56.128719 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit. 
Jun 20 19:00:56.129693 systemd[1]: sshd@21-46.62.134.149:22-139.178.68.195:54034.service: Deactivated successfully. Jun 20 19:00:56.132668 systemd[1]: session-22.scope: Deactivated successfully. Jun 20 19:00:56.134376 systemd-logind[1491]: Removed session 22. Jun 20 19:00:56.297309 systemd[1]: Started sshd@22-46.62.134.149:22-139.178.68.195:54044.service - OpenSSH per-connection server daemon (139.178.68.195:54044). Jun 20 19:00:56.337146 containerd[1508]: time="2025-06-20T19:00:56.336986099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccg8q,Uid:c8b8f5e5-e0cf-45ad-bd67-920064acd1a7,Namespace:kube-system,Attempt:0,}" Jun 20 19:00:56.382893 containerd[1508]: time="2025-06-20T19:00:56.382362518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 20 19:00:56.382893 containerd[1508]: time="2025-06-20T19:00:56.382433511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 20 19:00:56.382893 containerd[1508]: time="2025-06-20T19:00:56.382458518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:00:56.382893 containerd[1508]: time="2025-06-20T19:00:56.382550971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 20 19:00:56.403400 systemd[1]: Started cri-containerd-496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986.scope - libcontainer container 496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986. Jun 20 19:00:56.425658 containerd[1508]: time="2025-06-20T19:00:56.425483577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ccg8q,Uid:c8b8f5e5-e0cf-45ad-bd67-920064acd1a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\"" Jun 20 19:00:56.440657 containerd[1508]: time="2025-06-20T19:00:56.440623009Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:00:56.450196 containerd[1508]: time="2025-06-20T19:00:56.450159076Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7\"" Jun 20 19:00:56.451748 containerd[1508]: time="2025-06-20T19:00:56.450637543Z" level=info msg="StartContainer for \"301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7\"" Jun 20 19:00:56.471347 systemd[1]: Started cri-containerd-301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7.scope - libcontainer container 301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7. Jun 20 19:00:56.493074 containerd[1508]: time="2025-06-20T19:00:56.493046036Z" level=info msg="StartContainer for \"301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7\" returns successfully" Jun 20 19:00:56.504106 systemd[1]: cri-containerd-301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7.scope: Deactivated successfully. Jun 20 19:00:56.504536 systemd[1]: cri-containerd-301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7.scope: Consumed 16ms CPU time, 9.6M memory peak, 2.9M read from disk. 
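Annotation: the containerd entries above follow the CRI call order kubelet uses for the new pod: RunPodSandbox returns a sandbox ID, CreateContainer registers the mount-cgroup init container inside it, StartContainer runs it, and the transient cri-containerd scope is deactivated as soon as the short-lived container exits. A minimal sketch of that call order against the CRI runtime service follows; the sandbox and container configs are deliberately left skeletal and the socket path is the usual default, both assumptions rather than anything taken from this log.

```go
// crirun.go - a minimal sketch of the RunPodSandbox / CreateContainer /
// StartContainer sequence logged above. A real kubelet fills in many more
// config fields than this sketch shows.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func runInitContainer(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) error {

	// 1. Create the pod sandbox ("RunPodSandbox ... returns sandbox id").
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}

	// 2. Create the container inside it ("CreateContainer within sandbox ...").
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}

	// 3. Start it ("StartContainer for ... returns successfully").
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Config wiring is omitted in this sketch; runInitContainer shows the
	// call order that produced the log entries above.
	_ = runInitContainer
	_ = runtimeapi.NewRuntimeServiceClient(conn)
}
```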
Jun 20 19:00:56.528438 containerd[1508]: time="2025-06-20T19:00:56.528359468Z" level=info msg="shim disconnected" id=301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7 namespace=k8s.io Jun 20 19:00:56.528438 containerd[1508]: time="2025-06-20T19:00:56.528430602Z" level=warning msg="cleaning up after shim disconnected" id=301eccfa04f54d04a15adfa528251ebf839cd6d1b5c0d664a3c9b7a1f5b4d4d7 namespace=k8s.io Jun 20 19:00:56.528438 containerd[1508]: time="2025-06-20T19:00:56.528437986Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:57.226877 containerd[1508]: time="2025-06-20T19:00:57.226690523Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:00:57.257729 containerd[1508]: time="2025-06-20T19:00:57.257654203Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43\"" Jun 20 19:00:57.260371 containerd[1508]: time="2025-06-20T19:00:57.259906606Z" level=info msg="StartContainer for \"73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43\"" Jun 20 19:00:57.271026 sshd[4591]: Accepted publickey for core from 139.178.68.195 port 54044 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:57.272607 sshd-session[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:57.290191 systemd-logind[1491]: New session 23 of user core. Jun 20 19:00:57.295315 systemd[1]: Started cri-containerd-73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43.scope - libcontainer container 73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43. Jun 20 19:00:57.296469 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 20 19:00:57.315190 containerd[1508]: time="2025-06-20T19:00:57.315117262Z" level=info msg="StartContainer for \"73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43\" returns successfully" Jun 20 19:00:57.321949 systemd[1]: cri-containerd-73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43.scope: Deactivated successfully. Jun 20 19:00:57.322180 systemd[1]: cri-containerd-73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43.scope: Consumed 12ms CPU time, 7.6M memory peak, 2.1M read from disk. Jun 20 19:00:57.335169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43-rootfs.mount: Deactivated successfully. 
Jun 20 19:00:57.340551 containerd[1508]: time="2025-06-20T19:00:57.340501291Z" level=info msg="shim disconnected" id=73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43 namespace=k8s.io Jun 20 19:00:57.340551 containerd[1508]: time="2025-06-20T19:00:57.340545534Z" level=warning msg="cleaning up after shim disconnected" id=73510a934627431aa9cfe511d3b89a39f7496073885dc3219a09e319aaaadb43 namespace=k8s.io Jun 20 19:00:57.340820 containerd[1508]: time="2025-06-20T19:00:57.340553349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:57.946366 sshd[4718]: Connection closed by 139.178.68.195 port 54044 Jun 20 19:00:57.947298 sshd-session[4591]: pam_unix(sshd:session): session closed for user core Jun 20 19:00:57.953560 systemd[1]: sshd@22-46.62.134.149:22-139.178.68.195:54044.service: Deactivated successfully. Jun 20 19:00:57.957928 systemd[1]: session-23.scope: Deactivated successfully. Jun 20 19:00:57.959683 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. Jun 20 19:00:57.961565 systemd-logind[1491]: Removed session 23. Jun 20 19:00:58.123648 systemd[1]: Started sshd@23-46.62.134.149:22-139.178.68.195:54058.service - OpenSSH per-connection server daemon (139.178.68.195:54058). Jun 20 19:00:58.239422 containerd[1508]: time="2025-06-20T19:00:58.237713973Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:00:58.287527 containerd[1508]: time="2025-06-20T19:00:58.287381159Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac\"" Jun 20 19:00:58.289298 containerd[1508]: time="2025-06-20T19:00:58.289270822Z" level=info msg="StartContainer for \"5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac\"" Jun 20 19:00:58.329351 systemd[1]: Started cri-containerd-5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac.scope - libcontainer container 5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac. Jun 20 19:00:58.371799 containerd[1508]: time="2025-06-20T19:00:58.371757878Z" level=info msg="StartContainer for \"5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac\" returns successfully" Jun 20 19:00:58.377269 systemd[1]: cri-containerd-5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac.scope: Deactivated successfully. Jun 20 19:00:58.391668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac-rootfs.mount: Deactivated successfully. 
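Annotation: each Cilium init container above exits within milliseconds, after which its runtime v2 shim shuts down ("shim disconnected"), its scope is deactivated, and the per-container rootfs mount is cleaned up. The sketch below, assuming the containerd Go client (github.com/containerd/containerd; module path and API may differ across containerd releases), subscribes to runtime events and prints task-exit topics, the event stream behind those entries. It is illustrative only and not part of any component logged here.

```go
// shimwatch.go - a minimal sketch: subscribe to containerd events and print
// task-exit envelopes, corresponding to the "shim disconnected" / scope
// deactivation entries above for the short-lived init containers.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the k8s.io namespace, matching
	// the namespace=k8s.io fields in the containerd log entries above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	envelopes, errs := client.Subscribe(ctx)
	for {
		select {
		case env := <-envelopes:
			if env.Topic == "/tasks/exit" {
				fmt.Println(env.Timestamp.Format("15:04:05.000"), env.Namespace, env.Topic)
			}
		case err := <-errs:
			panic(err)
		}
	}
}
```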
Jun 20 19:00:58.397663 containerd[1508]: time="2025-06-20T19:00:58.397599666Z" level=info msg="shim disconnected" id=5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac namespace=k8s.io Jun 20 19:00:58.397663 containerd[1508]: time="2025-06-20T19:00:58.397657926Z" level=warning msg="cleaning up after shim disconnected" id=5be661e1ae19d132084a35412afc7b243f8790400fb3c5e8258035ce79e50cac namespace=k8s.io Jun 20 19:00:58.397860 containerd[1508]: time="2025-06-20T19:00:58.397666602Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:59.115346 sshd[4765]: Accepted publickey for core from 139.178.68.195 port 54058 ssh2: RSA SHA256:ttetsIDeDmoOcWQonGIv9sNpOX/TzQSVP7aimMY1zAQ Jun 20 19:00:59.116826 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:00:59.121395 systemd-logind[1491]: New session 24 of user core. Jun 20 19:00:59.127590 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 20 19:00:59.235590 containerd[1508]: time="2025-06-20T19:00:59.235492136Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:00:59.246183 containerd[1508]: time="2025-06-20T19:00:59.246136203Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215\"" Jun 20 19:00:59.246925 containerd[1508]: time="2025-06-20T19:00:59.246869258Z" level=info msg="StartContainer for \"2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215\"" Jun 20 19:00:59.303375 systemd[1]: Started cri-containerd-2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215.scope - libcontainer container 2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215. Jun 20 19:00:59.323860 systemd[1]: cri-containerd-2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215.scope: Deactivated successfully. Jun 20 19:00:59.325447 containerd[1508]: time="2025-06-20T19:00:59.325412233Z" level=info msg="StartContainer for \"2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215\" returns successfully" Jun 20 19:00:59.344456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215-rootfs.mount: Deactivated successfully. 
Jun 20 19:00:59.352617 containerd[1508]: time="2025-06-20T19:00:59.352527810Z" level=info msg="shim disconnected" id=2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215 namespace=k8s.io Jun 20 19:00:59.352617 containerd[1508]: time="2025-06-20T19:00:59.352598773Z" level=warning msg="cleaning up after shim disconnected" id=2ed24f91cf6a9455ec2e3701c30029eafd0f394aa83e7782babcc49976cba215 namespace=k8s.io Jun 20 19:00:59.352617 containerd[1508]: time="2025-06-20T19:00:59.352613191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 20 19:00:59.363434 containerd[1508]: time="2025-06-20T19:00:59.363383112Z" level=warning msg="cleanup warnings time=\"2025-06-20T19:00:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 20 19:01:00.245975 containerd[1508]: time="2025-06-20T19:01:00.244953410Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:01:00.267715 containerd[1508]: time="2025-06-20T19:01:00.267612823Z" level=info msg="CreateContainer within sandbox \"496ed9dbcc228a6259aec7e780cefd377ef5acb98ec88ba20e9d8b1eab5f1986\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab\"" Jun 20 19:01:00.271280 containerd[1508]: time="2025-06-20T19:01:00.269957810Z" level=info msg="StartContainer for \"145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab\"" Jun 20 19:01:00.322342 systemd[1]: Started cri-containerd-145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab.scope - libcontainer container 145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab. Jun 20 19:01:00.342464 containerd[1508]: time="2025-06-20T19:01:00.342435898Z" level=info msg="StartContainer for \"145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab\" returns successfully" Jun 20 19:01:00.360076 systemd[1]: run-containerd-runc-k8s.io-145e55bea20d23ea2b41c913344e079fa0d2de0fbdb1fd57b486671c654402ab-runc.5HzsTl.mount: Deactivated successfully. Jun 20 19:01:00.755644 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jun 20 19:01:01.257812 kubelet[2811]: I0620 19:01:01.257752 2811 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ccg8q" podStartSLOduration=6.257735786 podStartE2EDuration="6.257735786s" podCreationTimestamp="2025-06-20 19:00:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:01:01.25686914 +0000 UTC m=+345.841393710" watchObservedRunningTime="2025-06-20 19:01:01.257735786 +0000 UTC m=+345.842260356" Jun 20 19:01:03.395299 systemd-networkd[1403]: lxc_health: Link UP Jun 20 19:01:03.405080 systemd-networkd[1403]: lxc_health: Gained carrier Jun 20 19:01:04.515371 systemd-networkd[1403]: lxc_health: Gained IPv6LL Jun 20 19:01:08.669257 sshd[4823]: Connection closed by 139.178.68.195 port 54058 Jun 20 19:01:08.670844 sshd-session[4765]: pam_unix(sshd:session): session closed for user core Jun 20 19:01:08.675846 systemd[1]: sshd@23-46.62.134.149:22-139.178.68.195:54058.service: Deactivated successfully. Jun 20 19:01:08.675850 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. 
Jun 20 19:01:08.678471 systemd[1]: session-24.scope: Deactivated successfully. Jun 20 19:01:08.679963 systemd-logind[1491]: Removed session 24.
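Annotation: the pod_startup_latency_tracker entry above reports podStartSLOduration=6.257735786 for cilium-ccg8q; since the firstStartedPulling/lastFinishedPulling fields are zero, no image pull time is excluded and the SLO duration equals the end-to-end duration from pod creation to observed running. The sketch below recomputes that interval from the timestamps printed in the entry; the " m=+..." suffix is Go's monotonic clock reading and has to be stripped before parsing, and the result agrees with the logged duration only to within a millisecond because the fields were captured at slightly different instants.

```go
// startup.go - a minimal sketch: recompute the cilium-ccg8q startup duration
// from the timestamps in the pod_startup_latency_tracker entry above.
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-06-20 19:00:55 +0000 UTC")
	if err != nil {
		panic(err)
	}

	running := "2025-06-20 19:01:01.25686914 +0000 UTC m=+345.841393710"
	if before, _, ok := strings.Cut(running, " m="); ok {
		running = before // drop the monotonic clock suffix
	}
	observed, err := time.Parse(layout, running)
	if err != nil {
		panic(err)
	}

	// Prints roughly 6.257s, close to the logged podStartE2EDuration.
	fmt.Println(observed.Sub(created))
}
```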